Test Report: Docker_Linux_containerd 18384

818397ea37b8941bfdd3d988b855153c5c099b26:2024-03-14:33567

Test failures (1/335)

Order  Failed test                    Duration
38     TestAddons/parallel/Registry   16.34s
TestAddons/parallel/Registry (16.34s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.810612ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rwp7h" [efd2d391-a563-4757-b063-7225ac417042] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005075424s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9glk4" [be4a57b2-95b6-432f-bb91-5c1fbf957c78] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005194203s
addons_test.go:340: (dbg) Run:  kubectl --context addons-130663 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-130663 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-130663 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.650402875s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 ip
2024/03/14 18:01:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-130663 addons disable registry --alsologtostderr -v=1: exit status 11 (346.589911ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0314 18:01:38.334704  726661 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:01:38.334958  726661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:01:38.334967  726661 out.go:304] Setting ErrFile to fd 2...
	I0314 18:01:38.334973  726661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:01:38.335169  726661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 18:01:38.335455  726661 mustload.go:65] Loading cluster: addons-130663
	I0314 18:01:38.335809  726661 config.go:182] Loaded profile config "addons-130663": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:01:38.335835  726661 addons.go:597] checking whether the cluster is paused
	I0314 18:01:38.335925  726661 config.go:182] Loaded profile config "addons-130663": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:01:38.335938  726661 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:01:38.336346  726661 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:01:38.353097  726661 ssh_runner.go:195] Run: systemctl --version
	I0314 18:01:38.353168  726661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:01:38.369750  726661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:01:38.465641  726661 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0314 18:01:38.465718  726661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:01:38.538553  726661 cri.go:89] found id: "5d5394865d018851a05b3f53214a282c4ed3d5e508b616eb934d2da99b818fa0"
	I0314 18:01:38.538581  726661 cri.go:89] found id: "214aab11d3dbce2a13dbbd76fdab7bbff761d8939848750ea71448417be41148"
	I0314 18:01:38.538587  726661 cri.go:89] found id: "8fbe53bbf5b3a883c04e7fa23503c9d775fc4648f61ce462efc700ef4e8bf0dc"
	I0314 18:01:38.538592  726661 cri.go:89] found id: "f1b18a534ca5db267f32e1847828cc8a4fe0c6fa6074cde4adc3d3674aa0d6b2"
	I0314 18:01:38.538595  726661 cri.go:89] found id: "ca72f607e202b3d07cbded178e3c193a3fb4121aa0fe47a70df902d28f1c7067"
	I0314 18:01:38.538598  726661 cri.go:89] found id: "eedcb9b32a85d1995151d90948863cdb7af5a0ffe7968c32af1ccbb1c769207d"
	I0314 18:01:38.538601  726661 cri.go:89] found id: "abb9631bb67ea0cb929c20b43cfb720b6826e5b2b8e529c8222b8b3b5e6d17de"
	I0314 18:01:38.538604  726661 cri.go:89] found id: "83924663d22906322d4b9c161bd10853b457ea2c696a0627316435e803aea5a0"
	I0314 18:01:38.538606  726661 cri.go:89] found id: "66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba"
	I0314 18:01:38.538612  726661 cri.go:89] found id: "01019d218b30eb7fab122a1af28e90a99c82873192fc51d8395aacb218b5688d"
	I0314 18:01:38.538615  726661 cri.go:89] found id: "5d26828e50f0297206ccf8841a8da1009f6165eedd7286577e5f220441f1ffc6"
	I0314 18:01:38.538617  726661 cri.go:89] found id: "5301f05c5e49148d6d6011ebcb4e14b6b97943a201f349a5716ac78fe896b09f"
	I0314 18:01:38.538620  726661 cri.go:89] found id: "4e62fbc28777fc70de92f29a388a13bb55d972072e001a9294f9c674d695161c"
	I0314 18:01:38.538623  726661 cri.go:89] found id: "e64f6d8fe245c10b8e6528e051501224fa346dd400e95b567932d9ee532bbcd3"
	I0314 18:01:38.538626  726661 cri.go:89] found id: "eb9470166685570f66d73476774f4a5acb7e7df533db44c0ee43380774773fb1"
	I0314 18:01:38.538633  726661 cri.go:89] found id: "5bf5073f06a334ca251af8bde4ae5eaf5d372c5dfdc4d5a45cdeb00bb387449d"
	I0314 18:01:38.538635  726661 cri.go:89] found id: "8449e2b444d12ae448f56bb9fed3106f7a505a132dfabadc8c2e56853e7fa81f"
	I0314 18:01:38.538638  726661 cri.go:89] found id: "2534513c004159ed3d55c12a45ecdb9a6b2b2952356dff17693a40173512238a"
	I0314 18:01:38.538641  726661 cri.go:89] found id: "c118ead8adeab16d2595bf040959a61dd96a2340f5f7fbef3f105fa4fcd6854e"
	I0314 18:01:38.538643  726661 cri.go:89] found id: "fc05e01d28fbf39a12ae1091bb558c738ae21a01f23092fbff919370ec50a02f"
	I0314 18:01:38.538646  726661 cri.go:89] found id: "9859a64a8c5138a39c5a402c3def6f7e75c7cbebbf3b67b0da17510ba6430cac"
	I0314 18:01:38.538648  726661 cri.go:89] found id: "bce11c7af1262684844d7fb48e0808cc04b951161eb6b49a1919f2bb67771cc8"
	I0314 18:01:38.538651  726661 cri.go:89] found id: ""
	I0314 18:01:38.538689  726661 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0314 18:01:38.611242  726661 out.go:177] 
	W0314 18:01:38.612567  726661 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-14T18:01:38Z" level=error msg="stat /run/containerd/runc/k8s.io/66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-14T18:01:38Z" level=error msg="stat /run/containerd/runc/k8s.io/66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba: no such file or directory"
	
	W0314 18:01:38.612587  726661 out.go:239] * 
	* 
	W0314 18:01:38.616787  726661 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 18:01:38.618437  726661 out.go:177] 

** /stderr **
addons_test.go:390: failed to disable registry addon. args "out/minikube-linux-amd64 -p addons-130663 addons disable registry --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-130663
helpers_test.go:235: (dbg) docker inspect addons-130663:

-- stdout --
	[
	    {
	        "Id": "a93faca73061830f26b1db4921efde12fd97d601ac3a089d27707e1de85e34a1",
	        "Created": "2024-03-14T17:59:37.534474698Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717560,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-14T17:59:37.788091385Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:824841ec881aeec3697aa896b6eaaaed4a34726d2ba99ff4b9ca0b12f150022e",
	        "ResolvConfPath": "/var/lib/docker/containers/a93faca73061830f26b1db4921efde12fd97d601ac3a089d27707e1de85e34a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93faca73061830f26b1db4921efde12fd97d601ac3a089d27707e1de85e34a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93faca73061830f26b1db4921efde12fd97d601ac3a089d27707e1de85e34a1/hosts",
	        "LogPath": "/var/lib/docker/containers/a93faca73061830f26b1db4921efde12fd97d601ac3a089d27707e1de85e34a1/a93faca73061830f26b1db4921efde12fd97d601ac3a089d27707e1de85e34a1-json.log",
	        "Name": "/addons-130663",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-130663:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-130663",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a23da1bd0cc647426e327db5f685ed00590ee312f37ce31c3068dc4b6a207da4-init/diff:/var/lib/docker/overlay2/00b96f13f59b455187a477f3cf7a9264ace29c5c08b84e9bba9b6f1dc02b0737/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a23da1bd0cc647426e327db5f685ed00590ee312f37ce31c3068dc4b6a207da4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a23da1bd0cc647426e327db5f685ed00590ee312f37ce31c3068dc4b6a207da4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a23da1bd0cc647426e327db5f685ed00590ee312f37ce31c3068dc4b6a207da4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-130663",
	                "Source": "/var/lib/docker/volumes/addons-130663/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-130663",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-130663",
	                "name.minikube.sigs.k8s.io": "addons-130663",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c711818111141f616ccc8f2d7595e06501ee23c5431264cbbb668222082f1759",
	            "SandboxKey": "/var/run/docker/netns/c71181811114",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-130663": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a93faca73061",
	                        "addons-130663"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "fbb4273d5b535fad506ad268239b76a2c07edb431c4aae298f33449336af6e77",
	                    "EndpointID": "687ad7b2b7ab1826b8d2c223891096aca3936ebf107424286f1ea2d738725a7d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-130663",
	                        "a93faca73061"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-130663 -n addons-130663
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-130663 logs -n 25: (1.96098767s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-765000   | jenkins | v1.32.0 | 14 Mar 24 17:58 UTC |                     |
	|         | -p download-only-765000              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-765000              | download-only-765000   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| start   | -o=json --download-only              | download-only-484464   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | -p download-only-484464              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-484464              | download-only-484464   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| start   | -o=json --download-only              | download-only-095108   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | -p download-only-095108              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-095108              | download-only-095108   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-765000              | download-only-765000   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-484464              | download-only-484464   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-095108              | download-only-095108   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| start   | --download-only -p                   | download-docker-106123 | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | download-docker-106123               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-106123            | download-docker-106123 | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| start   | --download-only -p                   | binary-mirror-642543   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | binary-mirror-642543                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43189               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-642543              | binary-mirror-642543   | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| addons  | disable dashboard -p                 | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | addons-130663                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | addons-130663                        |                        |         |         |                     |                     |
	| start   | -p addons-130663 --wait=true         | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 18:01 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 18:01 UTC | 14 Mar 24 18:01 UTC |
	|         | -p addons-130663                     |                        |         |         |                     |                     |
	| addons  | addons-130663 addons disable         | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 18:01 UTC | 14 Mar 24 18:01 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-130663 addons                 | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 18:01 UTC | 14 Mar 24 18:01 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 18:01 UTC | 14 Mar 24 18:01 UTC |
	|         | -p addons-130663                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-130663 ip                     | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 18:01 UTC | 14 Mar 24 18:01 UTC |
	| addons  | addons-130663 addons disable         | addons-130663          | jenkins | v1.32.0 | 14 Mar 24 18:01 UTC |                     |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 17:59:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 17:59:16.043912  716907 out.go:291] Setting OutFile to fd 1 ...
	I0314 17:59:16.044263  716907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:59:16.044274  716907 out.go:304] Setting ErrFile to fd 2...
	I0314 17:59:16.044279  716907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:59:16.044436  716907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 17:59:16.045027  716907 out.go:298] Setting JSON to false
	I0314 17:59:16.046031  716907 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9707,"bootTime":1710429449,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 17:59:16.046095  716907 start.go:139] virtualization: kvm guest
	I0314 17:59:16.048102  716907 out.go:177] * [addons-130663] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 17:59:16.049422  716907 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 17:59:16.049434  716907 notify.go:220] Checking for updates...
	I0314 17:59:16.050629  716907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 17:59:16.051913  716907 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	I0314 17:59:16.053218  716907 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	I0314 17:59:16.054504  716907 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 17:59:16.055900  716907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 17:59:16.057408  716907 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 17:59:16.077500  716907 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 17:59:16.077604  716907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 17:59:16.125960  716907 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-14 17:59:16.11356765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 17:59:16.126063  716907 docker.go:295] overlay module found
	I0314 17:59:16.127772  716907 out.go:177] * Using the docker driver based on user configuration
	I0314 17:59:16.129076  716907 start.go:297] selected driver: docker
	I0314 17:59:16.129089  716907 start.go:901] validating driver "docker" against <nil>
	I0314 17:59:16.129103  716907 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 17:59:16.129980  716907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 17:59:16.176252  716907 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-14 17:59:16.167528502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 17:59:16.176471  716907 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 17:59:16.176770  716907 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 17:59:16.178460  716907 out.go:177] * Using Docker driver with root privileges
	I0314 17:59:16.179956  716907 cni.go:84] Creating CNI manager for ""
	I0314 17:59:16.179978  716907 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 17:59:16.179989  716907 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 17:59:16.180078  716907 start.go:340] cluster config:
	{Name:addons-130663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-130663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:59:16.181394  716907 out.go:177] * Starting "addons-130663" primary control-plane node in "addons-130663" cluster
	I0314 17:59:16.182596  716907 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 17:59:16.183779  716907 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0314 17:59:16.185002  716907 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 17:59:16.185033  716907 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0314 17:59:16.185028  716907 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 17:59:16.185042  716907 cache.go:56] Caching tarball of preloaded images
	I0314 17:59:16.185119  716907 preload.go:173] Found /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0314 17:59:16.185130  716907 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0314 17:59:16.185483  716907 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/config.json ...
	I0314 17:59:16.185513  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/config.json: {Name:mkcdaf3a11a2f18f48b96e187856c51941868214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:16.200157  716907 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 17:59:16.200262  716907 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 17:59:16.200277  716907 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0314 17:59:16.200281  716907 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0314 17:59:16.200288  716907 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 17:59:16.200295  716907 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from local cache
	I0314 17:59:29.138942  716907 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from cached tarball
	I0314 17:59:29.138991  716907 cache.go:194] Successfully downloaded all kic artifacts
	I0314 17:59:29.139038  716907 start.go:360] acquireMachinesLock for addons-130663: {Name:mke0bb4033b255a4e9d388e079412630550cab0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 17:59:29.139161  716907 start.go:364] duration metric: took 99.226µs to acquireMachinesLock for "addons-130663"
	I0314 17:59:29.139195  716907 start.go:93] Provisioning new machine with config: &{Name:addons-130663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-130663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0314 17:59:29.139291  716907 start.go:125] createHost starting for "" (driver="docker")
	I0314 17:59:29.242284  716907 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0314 17:59:29.242730  716907 start.go:159] libmachine.API.Create for "addons-130663" (driver="docker")
	I0314 17:59:29.242783  716907 client.go:168] LocalClient.Create starting
	I0314 17:59:29.242928  716907 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca.pem
	I0314 17:59:29.429624  716907 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/cert.pem
	I0314 17:59:29.604166  716907 cli_runner.go:164] Run: docker network inspect addons-130663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0314 17:59:29.620533  716907 cli_runner.go:211] docker network inspect addons-130663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0314 17:59:29.620613  716907 network_create.go:281] running [docker network inspect addons-130663] to gather additional debugging logs...
	I0314 17:59:29.620639  716907 cli_runner.go:164] Run: docker network inspect addons-130663
	W0314 17:59:29.636500  716907 cli_runner.go:211] docker network inspect addons-130663 returned with exit code 1
	I0314 17:59:29.636597  716907 network_create.go:284] error running [docker network inspect addons-130663]: docker network inspect addons-130663: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-130663 not found
	I0314 17:59:29.636612  716907 network_create.go:286] output of [docker network inspect addons-130663]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-130663 not found
	
	** /stderr **
	I0314 17:59:29.636741  716907 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 17:59:29.653194  716907 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002e51db0}
	I0314 17:59:29.653262  716907 network_create.go:124] attempt to create docker network addons-130663 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0314 17:59:29.653328  716907 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-130663 addons-130663
	I0314 17:59:29.991408  716907 network_create.go:108] docker network addons-130663 192.168.49.0/24 created
	I0314 17:59:29.991445  716907 kic.go:121] calculated static IP "192.168.49.2" for the "addons-130663" container
	I0314 17:59:29.991513  716907 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0314 17:59:30.006602  716907 cli_runner.go:164] Run: docker volume create addons-130663 --label name.minikube.sigs.k8s.io=addons-130663 --label created_by.minikube.sigs.k8s.io=true
	I0314 17:59:30.113306  716907 oci.go:103] Successfully created a docker volume addons-130663
	I0314 17:59:30.113454  716907 cli_runner.go:164] Run: docker run --rm --name addons-130663-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-130663 --entrypoint /usr/bin/test -v addons-130663:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0314 17:59:32.376617  716907 cli_runner.go:217] Completed: docker run --rm --name addons-130663-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-130663 --entrypoint /usr/bin/test -v addons-130663:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib: (2.263109512s)
	I0314 17:59:32.376654  716907 oci.go:107] Successfully prepared a docker volume addons-130663
	I0314 17:59:32.376690  716907 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 17:59:32.376715  716907 kic.go:194] Starting extracting preloaded images to volume ...
	I0314 17:59:32.376782  716907 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-130663:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0314 17:59:37.472673  716907 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-130663:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir: (5.095821228s)
	I0314 17:59:37.472724  716907 kic.go:203] duration metric: took 5.09600345s to extract preloaded images to volume ...
	W0314 17:59:37.472904  716907 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0314 17:59:37.473040  716907 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0314 17:59:37.519731  716907 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-130663 --name addons-130663 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-130663 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-130663 --network addons-130663 --ip 192.168.49.2 --volume addons-130663:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f
	I0314 17:59:37.795536  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Running}}
	I0314 17:59:37.812898  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 17:59:37.829543  716907 cli_runner.go:164] Run: docker exec addons-130663 stat /var/lib/dpkg/alternatives/iptables
	I0314 17:59:37.869258  716907 oci.go:144] the created container "addons-130663" has a running status.
	I0314 17:59:37.869290  716907 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa...
	I0314 17:59:37.947323  716907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0314 17:59:37.968033  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 17:59:37.984991  716907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0314 17:59:37.985014  716907 kic_runner.go:114] Args: [docker exec --privileged addons-130663 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0314 17:59:38.024544  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 17:59:38.046227  716907 machine.go:94] provisionDockerMachine start ...
	I0314 17:59:38.046344  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 17:59:38.064018  716907 main.go:141] libmachine: Using SSH client type: native
	I0314 17:59:38.064295  716907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I0314 17:59:38.064316  716907 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 17:59:38.064912  716907 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33994->127.0.0.1:33512: read: connection reset by peer
	I0314 17:59:41.192862  716907 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-130663
	
	I0314 17:59:41.192898  716907 ubuntu.go:169] provisioning hostname "addons-130663"
	I0314 17:59:41.192981  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 17:59:41.209685  716907 main.go:141] libmachine: Using SSH client type: native
	I0314 17:59:41.209873  716907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I0314 17:59:41.209885  716907 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-130663 && echo "addons-130663" | sudo tee /etc/hostname
	I0314 17:59:41.352344  716907 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-130663
	
	I0314 17:59:41.352432  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 17:59:41.369946  716907 main.go:141] libmachine: Using SSH client type: native
	I0314 17:59:41.370160  716907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I0314 17:59:41.370183  716907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-130663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-130663/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-130663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 17:59:41.497367  716907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 17:59:41.497399  716907 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18384-708595/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-708595/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-708595/.minikube}
	I0314 17:59:41.497419  716907 ubuntu.go:177] setting up certificates
	I0314 17:59:41.497432  716907 provision.go:84] configureAuth start
	I0314 17:59:41.497494  716907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-130663
	I0314 17:59:41.513644  716907 provision.go:143] copyHostCerts
	I0314 17:59:41.513733  716907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-708595/.minikube/cert.pem (1123 bytes)
	I0314 17:59:41.513872  716907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-708595/.minikube/key.pem (1671 bytes)
	I0314 17:59:41.513982  716907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-708595/.minikube/ca.pem (1078 bytes)
	I0314 17:59:41.514077  716907 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-708595/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca-key.pem org=jenkins.addons-130663 san=[127.0.0.1 192.168.49.2 addons-130663 localhost minikube]
	I0314 17:59:41.685767  716907 provision.go:177] copyRemoteCerts
	I0314 17:59:41.685839  716907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 17:59:41.685905  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 17:59:41.701837  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 17:59:41.793921  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 17:59:41.816024  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 17:59:41.837009  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 17:59:41.858436  716907 provision.go:87] duration metric: took 360.989049ms to configureAuth
	I0314 17:59:41.858484  716907 ubuntu.go:193] setting minikube options for container-runtime
	I0314 17:59:41.858687  716907 config.go:182] Loaded profile config "addons-130663": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 17:59:41.858707  716907 machine.go:97] duration metric: took 3.812450807s to provisionDockerMachine
	I0314 17:59:41.858717  716907 client.go:171] duration metric: took 12.615922845s to LocalClient.Create
	I0314 17:59:41.858743  716907 start.go:167] duration metric: took 12.616019228s to libmachine.API.Create "addons-130663"
	I0314 17:59:41.858757  716907 start.go:293] postStartSetup for "addons-130663" (driver="docker")
	I0314 17:59:41.858773  716907 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 17:59:41.858835  716907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 17:59:41.858883  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 17:59:41.876886  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 17:59:41.970613  716907 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 17:59:41.974096  716907 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0314 17:59:41.974138  716907 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0314 17:59:41.974146  716907 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0314 17:59:41.974154  716907 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0314 17:59:41.974167  716907 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-708595/.minikube/addons for local assets ...
	I0314 17:59:41.974219  716907 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-708595/.minikube/files for local assets ...
	I0314 17:59:41.974244  716907 start.go:296] duration metric: took 115.478707ms for postStartSetup
	I0314 17:59:41.974528  716907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-130663
	I0314 17:59:41.990635  716907 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/config.json ...
	I0314 17:59:41.990891  716907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 17:59:41.990932  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 17:59:42.006744  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 17:59:42.098200  716907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0314 17:59:42.102300  716907 start.go:128] duration metric: took 12.962991781s to createHost
	I0314 17:59:42.102325  716907 start.go:83] releasing machines lock for "addons-130663", held for 12.963148894s
	I0314 17:59:42.102397  716907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-130663
	I0314 17:59:42.118317  716907 ssh_runner.go:195] Run: cat /version.json
	I0314 17:59:42.118377  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 17:59:42.118458  716907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 17:59:42.118552  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 17:59:42.133876  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 17:59:42.135187  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 17:59:42.302453  716907 ssh_runner.go:195] Run: systemctl --version
	I0314 17:59:42.306767  716907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 17:59:42.311033  716907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0314 17:59:42.334003  716907 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0314 17:59:42.334091  716907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 17:59:42.359590  716907 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0314 17:59:42.359625  716907 start.go:494] detecting cgroup driver to use...
	I0314 17:59:42.359664  716907 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0314 17:59:42.359705  716907 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 17:59:42.371068  716907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 17:59:42.383461  716907 docker.go:217] disabling cri-docker service (if available) ...
	I0314 17:59:42.383531  716907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 17:59:42.395592  716907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 17:59:42.408150  716907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 17:59:42.490406  716907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 17:59:42.565441  716907 docker.go:233] disabling docker service ...
	I0314 17:59:42.565523  716907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 17:59:42.583423  716907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 17:59:42.593495  716907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 17:59:42.666043  716907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 17:59:42.745536  716907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 17:59:42.756292  716907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 17:59:42.771100  716907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 17:59:42.780197  716907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 17:59:42.789474  716907 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 17:59:42.789548  716907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 17:59:42.798428  716907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 17:59:42.807542  716907 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 17:59:42.816377  716907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 17:59:42.825502  716907 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 17:59:42.833896  716907 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 17:59:42.842881  716907 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 17:59:42.850916  716907 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 17:59:42.858851  716907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:59:42.929856  716907 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 17:59:43.031979  716907 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0314 17:59:43.032078  716907 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0314 17:59:43.035582  716907 start.go:562] Will wait 60s for crictl version
	I0314 17:59:43.035627  716907 ssh_runner.go:195] Run: which crictl
	I0314 17:59:43.038873  716907 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 17:59:43.071582  716907 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0314 17:59:43.071655  716907 ssh_runner.go:195] Run: containerd --version
	I0314 17:59:43.094237  716907 ssh_runner.go:195] Run: containerd --version
	I0314 17:59:43.119201  716907 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0314 17:59:43.120570  716907 cli_runner.go:164] Run: docker network inspect addons-130663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 17:59:43.136027  716907 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0314 17:59:43.139556  716907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 17:59:43.149843  716907 kubeadm.go:877] updating cluster {Name:addons-130663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-130663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 17:59:43.149990  716907 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 17:59:43.150059  716907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 17:59:43.182226  716907 containerd.go:612] all images are preloaded for containerd runtime.
	I0314 17:59:43.182250  716907 containerd.go:519] Images already preloaded, skipping extraction
	I0314 17:59:43.182299  716907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 17:59:43.213909  716907 containerd.go:612] all images are preloaded for containerd runtime.
	I0314 17:59:43.213933  716907 cache_images.go:84] Images are preloaded, skipping loading
	I0314 17:59:43.213942  716907 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0314 17:59:43.214056  716907 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-130663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-130663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 17:59:43.214111  716907 ssh_runner.go:195] Run: sudo crictl info
	I0314 17:59:43.248356  716907 cni.go:84] Creating CNI manager for ""
	I0314 17:59:43.248383  716907 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 17:59:43.248399  716907 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 17:59:43.248429  716907 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-130663 NodeName:addons-130663 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 17:59:43.248575  716907 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-130663"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 17:59:43.248653  716907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 17:59:43.257003  716907 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 17:59:43.257066  716907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 17:59:43.264956  716907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0314 17:59:43.281037  716907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 17:59:43.297017  716907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0314 17:59:43.313208  716907 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0314 17:59:43.316575  716907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 17:59:43.326908  716907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 17:59:43.397564  716907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 17:59:43.410404  716907 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663 for IP: 192.168.49.2
	I0314 17:59:43.410427  716907 certs.go:194] generating shared ca certs ...
	I0314 17:59:43.410444  716907 certs.go:226] acquiring lock for ca certs: {Name:mk4682923d58a5b9720305bd469cd2d30b0bfde1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.410578  716907 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-708595/.minikube/ca.key
	I0314 17:59:43.592141  716907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-708595/.minikube/ca.crt ...
	I0314 17:59:43.592177  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/ca.crt: {Name:mk833fe51fba4da790285403a848bfce49430530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.592386  716907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-708595/.minikube/ca.key ...
	I0314 17:59:43.592401  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/ca.key: {Name:mk1363d6f7533bbef31a752bf2e75dfa9dc7f508 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.592519  716907 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-708595/.minikube/proxy-client-ca.key
	I0314 17:59:43.666172  716907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-708595/.minikube/proxy-client-ca.crt ...
	I0314 17:59:43.666210  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/proxy-client-ca.crt: {Name:mk99280c977dd31d0cedffbc4f55a7cd2cb6201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.666432  716907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-708595/.minikube/proxy-client-ca.key ...
	I0314 17:59:43.666451  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/proxy-client-ca.key: {Name:mk0b76bef6988d80e5e0a9eab259e1a58b93db99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.666585  716907 certs.go:256] generating profile certs ...
	I0314 17:59:43.666650  716907 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.key
	I0314 17:59:43.666665  716907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt with IP's: []
	I0314 17:59:43.726264  716907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt ...
	I0314 17:59:43.726304  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: {Name:mk41be02e6cdc1872fa20b4078ca7222584a8d39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.726486  716907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.key ...
	I0314 17:59:43.726498  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.key: {Name:mke56ed458bcf7bd7469daac4137c01f482a5452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.726567  716907 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.key.9cf5b438
	I0314 17:59:43.726586  716907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.crt.9cf5b438 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0314 17:59:43.966743  716907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.crt.9cf5b438 ...
	I0314 17:59:43.966779  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.crt.9cf5b438: {Name:mk9753b578765a64b0be35289a54e60864640df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.966941  716907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.key.9cf5b438 ...
	I0314 17:59:43.966954  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.key.9cf5b438: {Name:mkaa57a7957dbc1826b5012c8588a3ac4cd04a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:43.967030  716907 certs.go:381] copying /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.crt.9cf5b438 -> /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.crt
	I0314 17:59:43.967139  716907 certs.go:385] copying /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.key.9cf5b438 -> /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.key
	I0314 17:59:43.967207  716907 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/proxy-client.key
	I0314 17:59:43.967238  716907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/proxy-client.crt with IP's: []
	I0314 17:59:44.015625  716907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/proxy-client.crt ...
	I0314 17:59:44.015656  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/proxy-client.crt: {Name:mk953d737330047bbfb3139e49601ec0be8a558a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:44.015814  716907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/proxy-client.key ...
	I0314 17:59:44.015827  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/proxy-client.key: {Name:mkeb34e7c7bb25a504b701527248b56f7a41bb45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:44.015997  716907 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca-key.pem (1675 bytes)
	I0314 17:59:44.016033  716907 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/ca.pem (1078 bytes)
	I0314 17:59:44.016056  716907 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/cert.pem (1123 bytes)
	I0314 17:59:44.016078  716907 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-708595/.minikube/certs/key.pem (1671 bytes)
	I0314 17:59:44.016737  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 17:59:44.039160  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 17:59:44.060278  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 17:59:44.082378  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 17:59:44.103641  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0314 17:59:44.125849  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 17:59:44.149631  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 17:59:44.171651  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 17:59:44.193616  716907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-708595/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 17:59:44.215247  716907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 17:59:44.231332  716907 ssh_runner.go:195] Run: openssl version
	I0314 17:59:44.236263  716907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 17:59:44.244757  716907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 17:59:44.247919  716907 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 17:59 /usr/share/ca-certificates/minikubeCA.pem
	I0314 17:59:44.247973  716907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 17:59:44.254447  716907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 17:59:44.263219  716907 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 17:59:44.266374  716907 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 17:59:44.266424  716907 kubeadm.go:391] StartCluster: {Name:addons-130663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-130663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:59:44.266509  716907 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0314 17:59:44.266582  716907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 17:59:44.299764  716907 cri.go:89] found id: ""
	I0314 17:59:44.299830  716907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 17:59:44.308150  716907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 17:59:44.316557  716907 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0314 17:59:44.316610  716907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 17:59:44.324595  716907 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 17:59:44.324615  716907 kubeadm.go:156] found existing configuration files:
	
	I0314 17:59:44.324679  716907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 17:59:44.332470  716907 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 17:59:44.332521  716907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 17:59:44.340171  716907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 17:59:44.347893  716907 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 17:59:44.347944  716907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 17:59:44.355482  716907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 17:59:44.362991  716907 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 17:59:44.363044  716907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 17:59:44.370191  716907 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 17:59:44.377642  716907 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 17:59:44.377690  716907 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
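The stale-config cleanup above follows one pattern four times: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint, and remove any file that lacks it (or does not exist) so that the subsequent `kubeadm init` regenerates it. A minimal sketch of that check, assuming a hypothetical `cleanup_stale_configs` helper whose directory and endpoint parameters stand in for `/etc/kubernetes` and `control-plane.minikube.internal:8443` (the function name and signature are ours, not minikube's):

```shell
#!/bin/sh
# Sketch of minikube's stale-kubeconfig check (kubeadm.go:162 in the log above).
# cleanup_stale_configs DIR ENDPOINT — hypothetical helper, not minikube code.
cleanup_stale_configs() {
  conf_dir="$1"
  endpoint="$2"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # grep -q exits nonzero when the pattern is absent OR the file is missing;
    # either way the config is considered stale and removed (rm -f tolerates
    # files that were never there, matching the "No such file" case in the log).
    if ! grep -q "$endpoint" "$conf_dir/$f" 2>/dev/null; then
      rm -f "$conf_dir/$f"
    fi
  done
}
```

In the run above every grep exits with status 2 because the files do not exist yet (fresh node), so all four paths fall through to `rm -f` harmlessly before `kubeadm init` writes fresh copies.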
	I0314 17:59:44.385936  716907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0314 17:59:44.427658  716907 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 17:59:44.427741  716907 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 17:59:44.463101  716907 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0314 17:59:44.463229  716907 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1053-gcp
	I0314 17:59:44.463312  716907 kubeadm.go:309] OS: Linux
	I0314 17:59:44.463390  716907 kubeadm.go:309] CGROUPS_CPU: enabled
	I0314 17:59:44.463465  716907 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0314 17:59:44.463554  716907 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0314 17:59:44.463636  716907 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0314 17:59:44.463699  716907 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0314 17:59:44.463766  716907 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0314 17:59:44.463838  716907 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0314 17:59:44.463912  716907 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0314 17:59:44.463979  716907 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0314 17:59:44.525274  716907 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 17:59:44.525420  716907 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 17:59:44.525530  716907 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 17:59:44.715099  716907 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 17:59:44.719116  716907 out.go:204]   - Generating certificates and keys ...
	I0314 17:59:44.719219  716907 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 17:59:44.719300  716907 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 17:59:44.821228  716907 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 17:59:45.006071  716907 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 17:59:45.263673  716907 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 17:59:45.367496  716907 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 17:59:45.510137  716907 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 17:59:45.510360  716907 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-130663 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0314 17:59:45.801233  716907 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 17:59:45.801407  716907 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-130663 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0314 17:59:45.970090  716907 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 17:59:46.134334  716907 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 17:59:46.195958  716907 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 17:59:46.196059  716907 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 17:59:46.406306  716907 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 17:59:46.562934  716907 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 17:59:46.747270  716907 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 17:59:46.846539  716907 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 17:59:46.847023  716907 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 17:59:46.849259  716907 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 17:59:46.851271  716907 out.go:204]   - Booting up control plane ...
	I0314 17:59:46.851386  716907 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 17:59:46.851492  716907 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 17:59:46.851578  716907 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 17:59:46.860198  716907 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 17:59:46.860904  716907 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 17:59:46.860964  716907 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 17:59:46.938760  716907 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 17:59:51.941108  716907 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.002442 seconds
	I0314 17:59:51.941239  716907 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 17:59:51.952475  716907 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 17:59:52.471019  716907 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 17:59:52.471296  716907 kubeadm.go:309] [mark-control-plane] Marking the node addons-130663 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 17:59:52.980770  716907 kubeadm.go:309] [bootstrap-token] Using token: ahcexu.ir6y84wlb8a16ca5
	I0314 17:59:52.982153  716907 out.go:204]   - Configuring RBAC rules ...
	I0314 17:59:52.982280  716907 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 17:59:52.986212  716907 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 17:59:52.992264  716907 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 17:59:52.994683  716907 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 17:59:52.998102  716907 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 17:59:53.000743  716907 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 17:59:53.010973  716907 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 17:59:53.229020  716907 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 17:59:53.403338  716907 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 17:59:53.404675  716907 kubeadm.go:309] 
	I0314 17:59:53.404787  716907 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 17:59:53.404834  716907 kubeadm.go:309] 
	I0314 17:59:53.404925  716907 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 17:59:53.404936  716907 kubeadm.go:309] 
	I0314 17:59:53.404967  716907 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 17:59:53.405053  716907 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 17:59:53.405117  716907 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 17:59:53.405129  716907 kubeadm.go:309] 
	I0314 17:59:53.405202  716907 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 17:59:53.405213  716907 kubeadm.go:309] 
	I0314 17:59:53.405278  716907 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 17:59:53.405291  716907 kubeadm.go:309] 
	I0314 17:59:53.405374  716907 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 17:59:53.405476  716907 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 17:59:53.405562  716907 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 17:59:53.405574  716907 kubeadm.go:309] 
	I0314 17:59:53.405670  716907 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 17:59:53.405767  716907 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 17:59:53.405779  716907 kubeadm.go:309] 
	I0314 17:59:53.405881  716907 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ahcexu.ir6y84wlb8a16ca5 \
	I0314 17:59:53.406023  716907 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9197fe7bd2e02ad29e6067f2076a61ca3c48856d1be80c00b1f531a823d9d623 \
	I0314 17:59:53.406063  716907 kubeadm.go:309] 	--control-plane 
	I0314 17:59:53.406069  716907 kubeadm.go:309] 
	I0314 17:59:53.406188  716907 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 17:59:53.406200  716907 kubeadm.go:309] 
	I0314 17:59:53.406307  716907 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ahcexu.ir6y84wlb8a16ca5 \
	I0314 17:59:53.406447  716907 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9197fe7bd2e02ad29e6067f2076a61ca3c48856d1be80c00b1f531a823d9d623 
	I0314 17:59:53.408987  716907 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-gcp\n", err: exit status 1
	I0314 17:59:53.409184  716907 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 17:59:53.409226  716907 cni.go:84] Creating CNI manager for ""
	I0314 17:59:53.409248  716907 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 17:59:53.411119  716907 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 17:59:53.412606  716907 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 17:59:53.417743  716907 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 17:59:53.417773  716907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 17:59:53.438008  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 17:59:54.191602  716907 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 17:59:54.191706  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:54.191774  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-130663 minikube.k8s.io/updated_at=2024_03_14T17_59_54_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=addons-130663 minikube.k8s.io/primary=true
	I0314 17:59:54.199644  716907 ops.go:34] apiserver oom_adj: -16
	I0314 17:59:54.261208  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:54.761961  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:55.262066  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:55.761393  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:56.261834  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:56.762034  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:57.261623  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:57.761458  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:58.262004  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:58.762192  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:59.261210  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 17:59:59.761900  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:00.262242  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:00.762247  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:01.261914  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:01.761516  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:02.261556  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:02.761497  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:03.262093  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:03.761373  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:04.261611  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:04.761927  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:05.261933  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:05.761424  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:00:06.262080  716907 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
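The repeated `kubectl get sa default` runs above are a readiness poll: minikube reruns the command on a roughly 500ms cadence until the default ServiceAccount exists (which is when elevateKubeSystemPrivileges can proceed). Generalized as a sketch, assuming a hypothetical `wait_for` helper whose timeout and command are parameters (the name and interface are ours, not minikube's):

```shell
#!/bin/sh
# Sketch of the readiness-polling pattern in the log above: retry a command
# every ~500ms until it succeeds or a deadline passes.
# wait_for TIMEOUT_SECONDS CMD [ARGS...] — hypothetical helper.
wait_for() {
  timeout_s="$1"; shift
  deadline=$(( $(date +%s) + timeout_s ))
  until "$@" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1   # deadline exceeded without a successful run
    fi
    sleep 0.5    # matches the ~500ms spacing of the log timestamps
  done
  return 0
}
```

In the run above the poll succeeds after about 12 seconds (the `duration metric` line that follows), well inside the 6m node-wait budget.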
	I0314 18:00:06.444374  716907 kubeadm.go:1106] duration metric: took 12.252738116s to wait for elevateKubeSystemPrivileges
	W0314 18:00:06.444415  716907 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 18:00:06.444425  716907 kubeadm.go:393] duration metric: took 22.178007806s to StartCluster
	I0314 18:00:06.444444  716907 settings.go:142] acquiring lock: {Name:mkbef0ca53582414a92687e8aca488ee962c66ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:00:06.444550  716907 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-708595/kubeconfig
	I0314 18:00:06.444900  716907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/kubeconfig: {Name:mk8e4f5a4cd269e035c8691bd4daa6ae58bfc9bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:00:06.445089  716907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 18:00:06.445124  716907 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0314 18:00:06.447143  716907 out.go:177] * Verifying Kubernetes components...
	I0314 18:00:06.445197  716907 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0314 18:00:06.445372  716907 config.go:182] Loaded profile config "addons-130663": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:00:06.448365  716907 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:00:06.448378  716907 addons.go:69] Setting gcp-auth=true in profile "addons-130663"
	I0314 18:00:06.448378  716907 addons.go:69] Setting default-storageclass=true in profile "addons-130663"
	I0314 18:00:06.448384  716907 addons.go:69] Setting ingress-dns=true in profile "addons-130663"
	I0314 18:00:06.448403  716907 mustload.go:65] Loading cluster: addons-130663
	I0314 18:00:06.448390  716907 addons.go:69] Setting cloud-spanner=true in profile "addons-130663"
	I0314 18:00:06.448413  716907 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-130663"
	I0314 18:00:06.448418  716907 addons.go:234] Setting addon ingress-dns=true in "addons-130663"
	I0314 18:00:06.448366  716907 addons.go:69] Setting yakd=true in profile "addons-130663"
	I0314 18:00:06.448430  716907 addons.go:234] Setting addon cloud-spanner=true in "addons-130663"
	I0314 18:00:06.448451  716907 addons.go:234] Setting addon yakd=true in "addons-130663"
	I0314 18:00:06.448446  716907 addons.go:69] Setting registry=true in profile "addons-130663"
	I0314 18:00:06.448470  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.448475  716907 addons.go:69] Setting storage-provisioner=true in profile "addons-130663"
	I0314 18:00:06.448479  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.448474  716907 addons.go:69] Setting metrics-server=true in profile "addons-130663"
	I0314 18:00:06.448490  716907 addons.go:234] Setting addon registry=true in "addons-130663"
	I0314 18:00:06.448494  716907 addons.go:234] Setting addon storage-provisioner=true in "addons-130663"
	I0314 18:00:06.448511  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.448516  716907 addons.go:234] Setting addon metrics-server=true in "addons-130663"
	I0314 18:00:06.448526  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.448545  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.448589  716907 config.go:182] Loaded profile config "addons-130663": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:00:06.448787  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.448828  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.448974  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.448993  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.449002  716907 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-130663"
	I0314 18:00:06.449027  716907 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-130663"
	I0314 18:00:06.449036  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.449298  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.448376  716907 addons.go:69] Setting ingress=true in profile "addons-130663"
	I0314 18:00:06.449504  716907 addons.go:234] Setting addon ingress=true in "addons-130663"
	I0314 18:00:06.449542  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.449549  716907 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-130663"
	I0314 18:00:06.448996  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.449579  716907 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-130663"
	I0314 18:00:06.449619  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.450012  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.450079  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.448366  716907 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-130663"
	I0314 18:00:06.450187  716907 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-130663"
	I0314 18:00:06.450221  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.450642  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.450965  716907 addons.go:69] Setting inspektor-gadget=true in profile "addons-130663"
	I0314 18:00:06.450995  716907 addons.go:234] Setting addon inspektor-gadget=true in "addons-130663"
	I0314 18:00:06.451025  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.451746  716907 addons.go:69] Setting volumesnapshots=true in profile "addons-130663"
	I0314 18:00:06.448367  716907 addons.go:69] Setting helm-tiller=true in profile "addons-130663"
	I0314 18:00:06.448470  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.454186  716907 addons.go:234] Setting addon helm-tiller=true in "addons-130663"
	I0314 18:00:06.454279  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.454705  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.450050  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.454792  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.454209  716907 addons.go:234] Setting addon volumesnapshots=true in "addons-130663"
	I0314 18:00:06.457646  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.469958  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.474409  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.486219  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.494422  716907 out.go:177]   - Using image docker.io/registry:2.8.3
	I0314 18:00:06.498534  716907 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0314 18:00:06.500132  716907 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:00:06.501646  716907 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0314 18:00:06.503372  716907 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:00:06.501937  716907 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0314 18:00:06.501953  716907 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0314 18:00:06.506851  716907 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0314 18:00:06.505111  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0314 18:00:06.505317  716907 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 18:00:06.507105  716907 addons.go:234] Setting addon default-storageclass=true in "addons-130663"
	I0314 18:00:06.509454  716907 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0314 18:00:06.509474  716907 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-130663"
	I0314 18:00:06.510280  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.512768  716907 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0314 18:00:06.514110  716907 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0314 18:00:06.514127  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0314 18:00:06.514184  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.513675  716907 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 18:00:06.514479  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 18:00:06.514524  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.513691  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0314 18:00:06.514697  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.513729  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.515289  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.513735  716907 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:00:06.513740  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0314 18:00:06.513754  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:06.517510  716907 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:00:06.517526  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 18:00:06.517612  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.520059  716907 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0314 18:00:06.518086  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.519513  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:06.521633  716907 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0314 18:00:06.521652  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0314 18:00:06.521698  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.538921  716907 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0314 18:00:06.540454  716907 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 18:00:06.540480  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0314 18:00:06.540551  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.546003  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0314 18:00:06.547252  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0314 18:00:06.548453  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0314 18:00:06.549708  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0314 18:00:06.550949  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0314 18:00:06.552341  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0314 18:00:06.553720  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0314 18:00:06.555061  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0314 18:00:06.556799  716907 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0314 18:00:06.556818  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0314 18:00:06.556883  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.562247  716907 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0314 18:00:06.563375  716907 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0314 18:00:06.563394  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0314 18:00:06.563452  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.565253  716907 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0314 18:00:06.567009  716907 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 18:00:06.567029  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0314 18:00:06.567094  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.575975  716907 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0314 18:00:06.575323  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.575337  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.575392  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.575506  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.578623  716907 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0314 18:00:06.578647  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0314 18:00:06.578718  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.578779  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.579211  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.587266  716907 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 18:00:06.587282  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 18:00:06.587323  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.595734  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.599896  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.619735  716907 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0314 18:00:06.615392  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.625356  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.627528  716907 out.go:177]   - Using image docker.io/busybox:stable
	I0314 18:00:06.629232  716907 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 18:00:06.629255  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0314 18:00:06.629322  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:06.633838  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.634199  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.637949  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:06.651249  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	W0314 18:00:06.700034  716907 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0314 18:00:06.700074  716907 retry.go:31] will retry after 278.700323ms: ssh: handshake failed: EOF
	I0314 18:00:06.813465  716907 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 18:00:06.813608  716907 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:00:06.825361  716907 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0314 18:00:06.825389  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0314 18:00:07.015091  716907 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0314 18:00:07.015127  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0314 18:00:07.098273  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0314 18:00:07.104748  716907 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0314 18:00:07.104777  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0314 18:00:07.106115  716907 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0314 18:00:07.106137  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0314 18:00:07.111624  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:00:07.120680  716907 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 18:00:07.120778  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0314 18:00:07.121904  716907 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0314 18:00:07.121926  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0314 18:00:07.201145  716907 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0314 18:00:07.201172  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0314 18:00:07.205348  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 18:00:07.208647  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 18:00:07.209521  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 18:00:07.211913  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 18:00:07.316365  716907 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0314 18:00:07.316469  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0314 18:00:07.405590  716907 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0314 18:00:07.405628  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0314 18:00:07.415244  716907 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0314 18:00:07.415349  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0314 18:00:07.497965  716907 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0314 18:00:07.498078  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0314 18:00:07.500236  716907 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0314 18:00:07.500309  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0314 18:00:07.598075  716907 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0314 18:00:07.598213  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0314 18:00:07.599214  716907 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 18:00:07.599298  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 18:00:07.600924  716907 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0314 18:00:07.601006  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0314 18:00:07.621103  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 18:00:07.700226  716907 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0314 18:00:07.700259  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0314 18:00:07.813883  716907 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0314 18:00:07.813920  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0314 18:00:07.909261  716907 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0314 18:00:07.909382  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0314 18:00:07.999886  716907 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 18:00:07.999988  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 18:00:08.011054  716907 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0314 18:00:08.011145  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0314 18:00:08.014441  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0314 18:00:08.202832  716907 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0314 18:00:08.202929  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0314 18:00:08.207882  716907 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0314 18:00:08.207979  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0314 18:00:08.216376  716907 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0314 18:00:08.216483  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0314 18:00:08.299292  716907 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:00:08.299388  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0314 18:00:08.505366  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0314 18:00:08.506560  716907 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0314 18:00:08.506586  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0314 18:00:08.606754  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 18:00:08.699976  716907 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.886215015s)
	I0314 18:00:08.700138  716907 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.886455859s)
	I0314 18:00:08.700178  716907 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0314 18:00:08.702598  716907 node_ready.go:35] waiting up to 6m0s for node "addons-130663" to be "Ready" ...
	I0314 18:00:08.707567  716907 node_ready.go:49] node "addons-130663" has status "Ready":"True"
	I0314 18:00:08.707593  716907 node_ready.go:38] duration metric: took 4.81572ms for node "addons-130663" to be "Ready" ...
	I0314 18:00:08.707604  716907 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:00:08.717052  716907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8dzcl" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:08.801413  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:00:08.813578  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0314 18:00:08.905006  716907 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0314 18:00:08.905112  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0314 18:00:09.002497  716907 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0314 18:00:09.002592  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0314 18:00:09.207042  716907 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-130663" context rescaled to 1 replicas
	I0314 18:00:09.399827  716907 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 18:00:09.399923  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0314 18:00:09.608254  716907 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0314 18:00:09.608348  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0314 18:00:09.800404  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 18:00:10.502439  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.404054329s)
	I0314 18:00:10.505691  716907 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0314 18:00:10.505779  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0314 18:00:10.720357  716907 pod_ready.go:97] error getting pod "coredns-5dd5756b68-8dzcl" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-8dzcl" not found
	I0314 18:00:10.720397  716907 pod_ready.go:81] duration metric: took 2.003305107s for pod "coredns-5dd5756b68-8dzcl" in "kube-system" namespace to be "Ready" ...
	E0314 18:00:10.720412  716907 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-8dzcl" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-8dzcl" not found
	I0314 18:00:10.720421  716907 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nrxkn" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:11.018342  716907 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0314 18:00:11.018451  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0314 18:00:11.415894  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.210445242s)
	I0314 18:00:11.416045  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.207283513s)
	I0314 18:00:11.416066  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.304381624s)
	I0314 18:00:11.603207  716907 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0314 18:00:11.603240  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0314 18:00:11.710620  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.501014748s)
	I0314 18:00:12.103907  716907 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 18:00:12.104011  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0314 18:00:12.319612  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 18:00:12.803141  716907 pod_ready.go:102] pod "coredns-5dd5756b68-nrxkn" in "kube-system" namespace has status "Ready":"False"
	I0314 18:00:13.310551  716907 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0314 18:00:13.310728  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:13.332583  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:13.900383  716907 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0314 18:00:14.007002  716907 addons.go:234] Setting addon gcp-auth=true in "addons-130663"
	I0314 18:00:14.007137  716907 host.go:66] Checking if "addons-130663" exists ...
	I0314 18:00:14.007826  716907 cli_runner.go:164] Run: docker container inspect addons-130663 --format={{.State.Status}}
	I0314 18:00:14.029983  716907 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0314 18:00:14.030036  716907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-130663
	I0314 18:00:14.046475  716907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/addons-130663/id_rsa Username:docker}
	I0314 18:00:14.522031  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.310076762s)
	I0314 18:00:14.522080  716907 addons.go:470] Verifying addon ingress=true in "addons-130663"
	I0314 18:00:14.525007  716907 out.go:177] * Verifying ingress addon...
	I0314 18:00:14.522303  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.901166971s)
	I0314 18:00:14.522353  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.50781689s)
	I0314 18:00:14.522392  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.016995924s)
	I0314 18:00:14.522482  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.915624436s)
	I0314 18:00:14.522656  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.72119458s)
	I0314 18:00:14.522700  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.709080806s)
	I0314 18:00:14.522769  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.722272673s)
	I0314 18:00:14.526713  716907 addons.go:470] Verifying addon registry=true in "addons-130663"
	W0314 18:00:14.526741  716907 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0314 18:00:14.528338  716907 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-130663 service yakd-dashboard -n yakd-dashboard
	
	I0314 18:00:14.526776  716907 retry.go:31] will retry after 289.245196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
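The failure above is a race, not a manifest bug: the `VolumeSnapshotClass` object is applied in the same `kubectl apply` invocation that creates the snapshot CRDs, so the API server has no REST mapping for the kind yet ("no matches for kind ... ensure CRDs are installed first"). The `retry.go:31` line shows how minikube recovers: it simply re-runs the apply after a backoff, by which time the CRDs are established. A minimal sketch of that retry-with-backoff pattern, with a stand-in function in place of the real `kubectl apply` (hypothetical; not minikube's actual code):

```shell
#!/bin/sh
# Sketch of retry-after-backoff for the "no matches for kind" race.
# apply_manifests is a stand-in for `kubectl apply -f ...`; here it
# fails twice (CRDs not yet established) and succeeds on attempt 3.
attempt=0
apply_manifests() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]
}

delay=1
until apply_manifests; do
  echo "attempt $attempt failed; will retry after ${delay}s"
  sleep 0   # the real code sleeps $delay; skipped to keep the sketch fast
  delay=$((delay * 2))   # exponential backoff between retries
done
echo "applied after $attempt attempts"
```

In a real cluster one can avoid the retry entirely by waiting for the CRDs' `Established` condition (e.g. `kubectl wait --for condition=Established crd/<name>`) before applying any custom resources.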
	I0314 18:00:14.526717  716907 addons.go:470] Verifying addon metrics-server=true in "addons-130663"
	I0314 18:00:14.527641  716907 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0314 18:00:14.531661  716907 out.go:177] * Verifying registry addon...
	I0314 18:00:14.534081  716907 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0314 18:00:14.535592  716907 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0314 18:00:14.535616  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:14.601692  716907 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0314 18:00:14.601722  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:14.820479  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:00:15.101386  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:15.107752  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:15.226429  716907 pod_ready.go:102] pod "coredns-5dd5756b68-nrxkn" in "kube-system" namespace has status "Ready":"False"
	I0314 18:00:15.600486  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:15.602353  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:16.037146  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:16.100340  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:16.535974  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:16.538079  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:16.831834  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.512094845s)
	I0314 18:00:16.831879  716907 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.801864041s)
	I0314 18:00:16.831888  716907 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-130663"
	I0314 18:00:16.834602  716907 out.go:177] * Verifying csi-hostpath-driver addon...
	I0314 18:00:16.832097  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.011578924s)
	I0314 18:00:16.836000  716907 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:00:16.837658  716907 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0314 18:00:16.839254  716907 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0314 18:00:16.839278  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0314 18:00:16.836920  716907 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0314 18:00:16.904539  716907 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0314 18:00:16.904563  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:16.920687  716907 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0314 18:00:16.920746  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0314 18:00:16.941207  716907 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 18:00:16.941447  716907 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0314 18:00:16.961312  716907 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 18:00:17.036935  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:17.039793  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:17.228675  716907 pod_ready.go:102] pod "coredns-5dd5756b68-nrxkn" in "kube-system" namespace has status "Ready":"False"
	I0314 18:00:17.344871  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:17.600744  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:17.606344  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:17.845748  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:18.037094  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:18.039970  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:18.299019  716907 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.337649111s)
	I0314 18:00:18.300094  716907 addons.go:470] Verifying addon gcp-auth=true in "addons-130663"
	I0314 18:00:18.302334  716907 out.go:177] * Verifying gcp-auth addon...
	I0314 18:00:18.304847  716907 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0314 18:00:18.308801  716907 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0314 18:00:18.308825  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:18.344826  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:18.537092  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:18.540360  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:18.809533  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:18.846061  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:19.036304  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:19.039441  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:19.308935  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:19.345592  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:19.537068  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:19.538594  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:19.727367  716907 pod_ready.go:102] pod "coredns-5dd5756b68-nrxkn" in "kube-system" namespace has status "Ready":"False"
	I0314 18:00:19.808931  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:19.844855  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:20.036143  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:20.038205  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:20.309298  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:20.345991  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:20.536754  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:20.538298  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:20.808317  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:20.845593  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:21.036671  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:21.039252  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:21.309709  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:21.345441  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:21.536708  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:21.538874  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:21.727488  716907 pod_ready.go:92] pod "coredns-5dd5756b68-nrxkn" in "kube-system" namespace has status "Ready":"True"
	I0314 18:00:21.727514  716907 pod_ready.go:81] duration metric: took 11.007085101s for pod "coredns-5dd5756b68-nrxkn" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.727524  716907 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-130663" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.732230  716907 pod_ready.go:92] pod "etcd-addons-130663" in "kube-system" namespace has status "Ready":"True"
	I0314 18:00:21.732251  716907 pod_ready.go:81] duration metric: took 4.72197ms for pod "etcd-addons-130663" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.732263  716907 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-130663" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.736929  716907 pod_ready.go:92] pod "kube-apiserver-addons-130663" in "kube-system" namespace has status "Ready":"True"
	I0314 18:00:21.736954  716907 pod_ready.go:81] duration metric: took 4.68246ms for pod "kube-apiserver-addons-130663" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.736973  716907 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-130663" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.741471  716907 pod_ready.go:92] pod "kube-controller-manager-addons-130663" in "kube-system" namespace has status "Ready":"True"
	I0314 18:00:21.741493  716907 pod_ready.go:81] duration metric: took 4.510321ms for pod "kube-controller-manager-addons-130663" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.741502  716907 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-plkpb" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.745445  716907 pod_ready.go:92] pod "kube-proxy-plkpb" in "kube-system" namespace has status "Ready":"True"
	I0314 18:00:21.745465  716907 pod_ready.go:81] duration metric: took 3.957067ms for pod "kube-proxy-plkpb" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.745473  716907 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-130663" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:21.808511  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:21.845065  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:22.036803  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:22.038513  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:22.125642  716907 pod_ready.go:92] pod "kube-scheduler-addons-130663" in "kube-system" namespace has status "Ready":"True"
	I0314 18:00:22.125669  716907 pod_ready.go:81] duration metric: took 380.188992ms for pod "kube-scheduler-addons-130663" in "kube-system" namespace to be "Ready" ...
	I0314 18:00:22.125680  716907 pod_ready.go:38] duration metric: took 13.418064025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:00:22.125702  716907 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:00:22.125765  716907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:00:22.138339  716907 api_server.go:72] duration metric: took 15.693179888s to wait for apiserver process to appear ...
	I0314 18:00:22.138374  716907 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:00:22.138402  716907 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0314 18:00:22.142356  716907 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0314 18:00:22.143495  716907 api_server.go:141] control plane version: v1.28.4
	I0314 18:00:22.143519  716907 api_server.go:131] duration metric: took 5.138094ms to wait for apiserver health ...
	I0314 18:00:22.143528  716907 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:00:22.309151  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:22.330929  716907 system_pods.go:59] 19 kube-system pods found
	I0314 18:00:22.330969  716907 system_pods.go:61] "coredns-5dd5756b68-nrxkn" [b9ff6021-bf5e-46cf-b7d2-99495678232d] Running
	I0314 18:00:22.330978  716907 system_pods.go:61] "csi-hostpath-attacher-0" [48914c30-d56c-4bf4-8cb9-da8e7dc336b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0314 18:00:22.330985  716907 system_pods.go:61] "csi-hostpath-resizer-0" [785268bb-bf93-4b58-bc56-7c07ff0ae264] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0314 18:00:22.330993  716907 system_pods.go:61] "csi-hostpathplugin-vs7pr" [728c0a70-0fc2-41f2-a7cc-18f20183b24e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0314 18:00:22.331000  716907 system_pods.go:61] "etcd-addons-130663" [3914b7f7-253b-4c1e-818e-82129b6623a6] Running
	I0314 18:00:22.331004  716907 system_pods.go:61] "kindnet-xlkc9" [f9f8f159-aadf-48fa-be33-90e60c928af4] Running
	I0314 18:00:22.331008  716907 system_pods.go:61] "kube-apiserver-addons-130663" [2c4fbc84-f2a0-4c18-89cd-cabddca23e94] Running
	I0314 18:00:22.331011  716907 system_pods.go:61] "kube-controller-manager-addons-130663" [26e7d18e-c28f-4cd1-a137-3aeb26608797] Running
	I0314 18:00:22.331016  716907 system_pods.go:61] "kube-ingress-dns-minikube" [400e5242-09a5-4f70-9e2d-b1e1a67962b2] Running
	I0314 18:00:22.331019  716907 system_pods.go:61] "kube-proxy-plkpb" [b4a36e91-3658-4fac-9739-75f83bbba600] Running
	I0314 18:00:22.331023  716907 system_pods.go:61] "kube-scheduler-addons-130663" [16a3a80b-6e48-462c-b801-538521549f8d] Running
	I0314 18:00:22.331028  716907 system_pods.go:61] "metrics-server-69cf46c98-fj9bs" [12e34166-87d0-420e-838e-3330b6c08895] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 18:00:22.331035  716907 system_pods.go:61] "nvidia-device-plugin-daemonset-h2bht" [c23151de-1acc-465e-8784-a45c2aea26e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0314 18:00:22.331042  716907 system_pods.go:61] "registry-proxy-9glk4" [be4a57b2-95b6-432f-bb91-5c1fbf957c78] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0314 18:00:22.331051  716907 system_pods.go:61] "registry-rwp7h" [efd2d391-a563-4757-b063-7225ac417042] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0314 18:00:22.331057  716907 system_pods.go:61] "snapshot-controller-58dbcc7b99-6kgv4" [d05c53c6-b7c1-4b0f-b9d6-58ae73d2e946] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0314 18:00:22.331066  716907 system_pods.go:61] "snapshot-controller-58dbcc7b99-bvvb5" [cf92a438-60f0-4d81-8420-e50909ffc525] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0314 18:00:22.331073  716907 system_pods.go:61] "storage-provisioner" [06fae389-d187-4a14-b485-eebec71a0130] Running
	I0314 18:00:22.331078  716907 system_pods.go:61] "tiller-deploy-7b677967b9-lgjv8" [06a01b4c-26d3-4589-b5c2-583042a56221] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0314 18:00:22.331087  716907 system_pods.go:74] duration metric: took 187.5537ms to wait for pod list to return data ...
	I0314 18:00:22.331099  716907 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:00:22.345110  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:22.524649  716907 default_sa.go:45] found service account: "default"
	I0314 18:00:22.524676  716907 default_sa.go:55] duration metric: took 193.568244ms for default service account to be created ...
	I0314 18:00:22.524686  716907 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:00:22.536427  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:22.538677  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:22.730802  716907 system_pods.go:86] 19 kube-system pods found
	I0314 18:00:22.730838  716907 system_pods.go:89] "coredns-5dd5756b68-nrxkn" [b9ff6021-bf5e-46cf-b7d2-99495678232d] Running
	I0314 18:00:22.730850  716907 system_pods.go:89] "csi-hostpath-attacher-0" [48914c30-d56c-4bf4-8cb9-da8e7dc336b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0314 18:00:22.730860  716907 system_pods.go:89] "csi-hostpath-resizer-0" [785268bb-bf93-4b58-bc56-7c07ff0ae264] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0314 18:00:22.730871  716907 system_pods.go:89] "csi-hostpathplugin-vs7pr" [728c0a70-0fc2-41f2-a7cc-18f20183b24e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0314 18:00:22.730879  716907 system_pods.go:89] "etcd-addons-130663" [3914b7f7-253b-4c1e-818e-82129b6623a6] Running
	I0314 18:00:22.730886  716907 system_pods.go:89] "kindnet-xlkc9" [f9f8f159-aadf-48fa-be33-90e60c928af4] Running
	I0314 18:00:22.730892  716907 system_pods.go:89] "kube-apiserver-addons-130663" [2c4fbc84-f2a0-4c18-89cd-cabddca23e94] Running
	I0314 18:00:22.730899  716907 system_pods.go:89] "kube-controller-manager-addons-130663" [26e7d18e-c28f-4cd1-a137-3aeb26608797] Running
	I0314 18:00:22.730907  716907 system_pods.go:89] "kube-ingress-dns-minikube" [400e5242-09a5-4f70-9e2d-b1e1a67962b2] Running
	I0314 18:00:22.730917  716907 system_pods.go:89] "kube-proxy-plkpb" [b4a36e91-3658-4fac-9739-75f83bbba600] Running
	I0314 18:00:22.730923  716907 system_pods.go:89] "kube-scheduler-addons-130663" [16a3a80b-6e48-462c-b801-538521549f8d] Running
	I0314 18:00:22.730932  716907 system_pods.go:89] "metrics-server-69cf46c98-fj9bs" [12e34166-87d0-420e-838e-3330b6c08895] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 18:00:22.730951  716907 system_pods.go:89] "nvidia-device-plugin-daemonset-h2bht" [c23151de-1acc-465e-8784-a45c2aea26e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0314 18:00:22.730961  716907 system_pods.go:89] "registry-proxy-9glk4" [be4a57b2-95b6-432f-bb91-5c1fbf957c78] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0314 18:00:22.730970  716907 system_pods.go:89] "registry-rwp7h" [efd2d391-a563-4757-b063-7225ac417042] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0314 18:00:22.730984  716907 system_pods.go:89] "snapshot-controller-58dbcc7b99-6kgv4" [d05c53c6-b7c1-4b0f-b9d6-58ae73d2e946] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0314 18:00:22.730998  716907 system_pods.go:89] "snapshot-controller-58dbcc7b99-bvvb5" [cf92a438-60f0-4d81-8420-e50909ffc525] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0314 18:00:22.731011  716907 system_pods.go:89] "storage-provisioner" [06fae389-d187-4a14-b485-eebec71a0130] Running
	I0314 18:00:22.731022  716907 system_pods.go:89] "tiller-deploy-7b677967b9-lgjv8" [06a01b4c-26d3-4589-b5c2-583042a56221] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0314 18:00:22.731036  716907 system_pods.go:126] duration metric: took 206.342384ms to wait for k8s-apps to be running ...
	I0314 18:00:22.731050  716907 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:00:22.731110  716907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:00:22.742730  716907 system_svc.go:56] duration metric: took 11.6698ms WaitForService to wait for kubelet
	I0314 18:00:22.742760  716907 kubeadm.go:576] duration metric: took 16.297611784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:00:22.742788  716907 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:00:22.808203  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:22.844969  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:22.925214  716907 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0314 18:00:22.925246  716907 node_conditions.go:123] node cpu capacity is 8
	I0314 18:00:22.925260  716907 node_conditions.go:105] duration metric: took 182.468368ms to run NodePressure ...
	I0314 18:00:22.925270  716907 start.go:240] waiting for startup goroutines ...
	I0314 18:00:23.035974  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:23.038612  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:23.308580  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:23.345807  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:23.535572  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:23.541477  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:23.809216  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:23.845027  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:24.036529  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:24.038638  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:24.308974  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:24.344467  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:24.535610  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:24.538634  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:24.808964  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:24.844616  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:25.035831  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:25.038502  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:25.308744  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:25.345421  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:25.536370  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:25.538500  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:25.809057  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:25.844839  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:26.036601  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:26.040702  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:26.309151  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:26.344847  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:26.536185  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:26.538417  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:26.808595  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:26.844522  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:27.036424  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:27.038719  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:27.308867  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:27.344823  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:27.535802  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:27.538356  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:27.808334  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:27.845610  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:28.035728  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:28.038569  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:28.308779  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:28.344926  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:28.536144  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:28.538649  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:28.808955  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:28.844780  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:29.036473  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:29.038586  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:29.308975  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:29.345575  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:29.536260  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:29.539282  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:29.808745  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:29.844056  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:30.036438  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:30.038717  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:30.308835  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:30.345040  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:30.535861  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:30.538523  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:30.808803  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:30.844341  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:31.036970  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:31.038420  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:31.309002  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:31.345247  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:31.536202  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:31.538203  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:31.808109  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:31.844872  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:32.036121  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:32.038555  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:32.308656  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:32.345313  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:32.536604  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:32.538822  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:32.809070  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:32.844824  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:33.037006  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:33.038097  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:33.308307  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:33.345136  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:33.536267  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:33.538945  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:33.808844  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:33.844519  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:34.035835  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:34.041785  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:34.308899  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:34.344686  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:34.536378  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:34.538250  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:34.809086  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:34.844945  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:35.036306  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:35.038074  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:35.308447  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:35.345004  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:35.536170  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:35.538322  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:35.808362  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:35.844823  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:36.036365  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:36.038446  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:36.308505  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:36.345298  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:36.536482  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:36.538610  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:36.808590  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:36.844457  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:37.035792  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:37.038751  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:37.308706  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:37.344636  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:37.535715  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:37.539222  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:37.808178  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:37.844736  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:38.037116  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:38.038872  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:38.308744  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:38.344099  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:38.536426  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:38.538530  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:38.808715  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:38.844146  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:39.036708  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:39.038378  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:39.308773  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:39.345395  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:39.536325  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:39.538186  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:39.808162  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:39.845493  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:40.036826  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:40.038950  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:40.309313  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:40.345015  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:40.536202  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:40.537940  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:40.809118  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:40.846114  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:41.039141  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:41.040714  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:41.309674  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:41.346040  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:41.536503  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:41.539297  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:41.808885  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:41.845428  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:42.036300  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:42.038260  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:42.308543  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:42.345452  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:42.535895  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:42.538836  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:42.809002  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:42.844698  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:43.036191  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:43.038063  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:43.308247  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:43.344687  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:43.535548  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:43.538454  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:43.808886  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:43.843953  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:44.037010  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:44.038899  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:44.308740  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:44.344445  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:44.537207  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:44.538912  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:44.808849  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:44.844293  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:45.036826  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:45.039083  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:45.309030  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:45.345165  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:45.536690  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:45.538961  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:45.808884  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:45.844095  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:46.036974  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:46.041814  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:46.309181  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:46.344661  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:46.536482  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:46.538101  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:46.808246  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:46.844802  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:47.036500  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:47.038375  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:47.309017  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:47.345659  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:47.537058  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:47.539260  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:47.809202  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:47.844133  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:48.036154  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:48.038611  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:48.310121  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:48.345580  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:48.536144  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:48.539126  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:48.809749  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:48.845820  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:49.036606  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:49.038501  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:49.312166  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:49.345284  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:49.536281  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:49.538472  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:49.829011  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:49.977798  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:50.042838  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:50.044112  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:50.308060  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:50.344668  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:50.536434  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:50.538468  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:50.809082  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:50.845447  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:51.036617  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:51.038826  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:51.309079  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:51.344865  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:51.536184  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:51.538462  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:51.808755  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:51.844632  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:52.036170  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:52.039559  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:52.309172  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:52.345553  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:52.607826  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:52.608523  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:52.808918  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:52.844825  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:53.036563  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:53.038689  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:53.308710  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:53.344266  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:53.536814  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:53.539185  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:53.810079  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:53.847195  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:54.036637  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:54.038574  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:54.308978  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:54.345461  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:54.537240  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:54.539378  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:54.808837  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:54.844328  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:55.036982  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:55.039215  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:55.308739  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:55.345001  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:55.536542  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:55.538627  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:55.809264  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:55.846009  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:56.036538  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:56.038467  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:56.309458  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:56.345697  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:56.536737  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:56.538755  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:56.809031  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:56.845728  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:57.035778  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:57.038546  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:57.308830  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:57.345514  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:57.537185  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:57.539333  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:57.808493  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:57.845585  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:58.036623  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:58.038709  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:58.308594  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:58.345712  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:58.536014  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:58.538887  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:58.809372  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:58.848920  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:59.036425  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:59.039176  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:59.309896  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:59.345546  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:00:59.536948  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:00:59.539134  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:00:59.808996  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:00:59.845835  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:00.036560  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:00.039163  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:00.309103  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:00.344757  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:00.536227  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:00.538293  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:00.808593  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:00.845283  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:01.037409  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:01.039236  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:01.308414  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:01.344746  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:01.537062  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:01.539012  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:01.809855  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:01.845360  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:02.037021  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:02.038761  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:02.309403  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:02.345020  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:02.536224  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:02.538778  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:02.809045  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:02.844988  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:03.035793  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:03.038236  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:03.308793  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:03.345876  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:03.536395  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:03.539627  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:03.808827  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:03.845175  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:04.036521  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:04.039441  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:04.308305  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:04.345788  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:04.537166  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:04.539702  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:04.809313  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:04.845792  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:05.035929  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:05.039345  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:05.308476  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:05.345316  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:05.537256  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:05.539390  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:05.808389  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:05.845231  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:06.036939  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:06.038716  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:06.309013  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:06.347645  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:06.536422  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:06.539030  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:06.842624  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:06.845845  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:07.037106  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:07.107010  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:01:07.451315  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:07.454140  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:07.536102  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:07.539407  716907 kapi.go:107] duration metric: took 53.005323576s to wait for kubernetes.io/minikube-addons=registry ...
	I0314 18:01:07.822837  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:07.846155  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:08.057956  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:08.308679  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:08.345413  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:08.656422  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:08.808744  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:08.844357  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:09.036414  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:09.309091  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:09.345107  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:09.536192  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:09.810043  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:09.846213  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:10.036652  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:10.309396  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:10.346817  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:10.536507  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:10.809104  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:10.845854  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:11.036531  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:11.309001  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:11.345784  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:11.537086  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:11.809672  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:11.972241  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:12.182475  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:12.308705  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:12.344418  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:12.535999  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:12.809029  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:12.844271  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:13.037104  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:13.308671  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:13.345301  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:13.536439  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:13.808281  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:13.846099  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:14.037119  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:14.309287  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:14.345827  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:14.536513  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:14.808156  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:14.844949  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:15.035725  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:15.308715  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:15.346002  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:15.536319  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:15.809780  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:15.845486  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:16.037419  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:16.308904  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:16.344217  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:16.535654  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:16.808805  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:16.846182  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:17.036736  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:17.308593  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:17.346579  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:17.535630  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:17.808787  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:17.845654  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:18.036726  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:18.308713  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:18.344343  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:18.537196  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:18.809649  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:18.902544  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:19.036350  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:19.308959  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:19.345254  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:19.536599  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:19.808257  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:19.845966  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:20.036591  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:20.308642  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:20.345961  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:20.536435  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:20.809795  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:20.845019  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:21.035981  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:21.309639  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:21.345914  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:21.536038  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:21.809546  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:21.846054  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:22.036418  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:22.310757  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:22.346866  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:22.537153  716907 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:01:22.809018  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:22.845793  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:23.038756  716907 kapi.go:107] duration metric: took 1m8.51111281s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0314 18:01:23.309275  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:23.347655  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:23.809110  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:23.844425  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:01:24.308719  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:24.346341  716907 kapi.go:107] duration metric: took 1m7.509413343s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0314 18:01:24.809048  716907 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:01:25.308494  716907 kapi.go:107] duration metric: took 1m7.003642308s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0314 18:01:25.310198  716907 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-130663 cluster.
	I0314 18:01:25.311590  716907 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0314 18:01:25.313074  716907 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0314 18:01:25.314639  716907 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, helm-tiller, yakd, metrics-server, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0314 18:01:25.315886  716907 addons.go:505] duration metric: took 1m18.870689542s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher inspektor-gadget helm-tiller yakd metrics-server default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0314 18:01:25.315941  716907 start.go:245] waiting for cluster config update ...
	I0314 18:01:25.315969  716907 start.go:254] writing updated cluster config ...
	I0314 18:01:25.316378  716907 ssh_runner.go:195] Run: rm -f paused
	I0314 18:01:25.366978  716907 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 18:01:25.369295  716907 out.go:177] * Done! kubectl is now configured to use "addons-130663" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	aa9792f23f80a       beae173ccac6a       3 seconds ago        Exited              registry-test                            0                   25f981b38e8b2       registry-test
	54b069cf1df82       81f48f8d24e42       5 seconds ago        Exited              gadget                                   2                   011715ae02d67       gadget-945jd
	c57a9cae8ce0c       db2fc13d44d50       15 seconds ago       Running             gcp-auth                                 0                   e056c0a0cda85       gcp-auth-7d69788767-c8gfj
	5d5394865d018       738351fd438f0       16 seconds ago       Running             csi-snapshotter                          0                   9f9c36dd11a73       csi-hostpathplugin-vs7pr
	0cd2021300c93       ffcc66479b5ba       17 seconds ago       Running             controller                               0                   e01a7f5d39119       ingress-nginx-controller-76dc478dd8-97467
	214aab11d3dbc       931dbfd16f87c       20 seconds ago       Running             csi-provisioner                          0                   9f9c36dd11a73       csi-hostpathplugin-vs7pr
	8fbe53bbf5b3a       e899260153aed       21 seconds ago       Running             liveness-probe                           0                   9f9c36dd11a73       csi-hostpathplugin-vs7pr
	f1b18a534ca5d       e255e073c508c       22 seconds ago       Running             hostpath                                 0                   9f9c36dd11a73       csi-hostpathplugin-vs7pr
	ca72f607e202b       88ef14a257f42       23 seconds ago       Running             node-driver-registrar                    0                   9f9c36dd11a73       csi-hostpathplugin-vs7pr
	7b0d2ca6870db       b29d748098e32       28 seconds ago       Exited              patch                                    0                   75388e5de410b       ingress-nginx-admission-patch-vksj4
	eedcb9b32a85d       aa61ee9c70bc4       28 seconds ago       Running             volume-snapshot-controller               0                   7b1c20e2485a9       snapshot-controller-58dbcc7b99-bvvb5
	bd892239b2ff2       31de47c733c91       28 seconds ago       Running             yakd                                     0                   47448b4be2a0f       yakd-dashboard-9947fc6bf-fnsss
	abb9631bb67ea       59cbb42146a37       31 seconds ago       Running             csi-attacher                             0                   da5ae0eb96137       csi-hostpath-attacher-0
	ec712ce6dca9b       b29d748098e32       33 seconds ago       Exited              create                                   0                   ed5d7bbd2538c       ingress-nginx-admission-create-klrjw
	83924663d2290       a8781fe3b7a28       33 seconds ago       Running             registry                                 0                   05c33f6999ef4       registry-rwp7h
	d604695744cd3       e16d1e3a10667       36 seconds ago       Running             local-path-provisioner                   0                   00a9e2fa3eb28       local-path-provisioner-78b46b4d5c-q4vq9
	3e386ec090038       35eab485356b4       37 seconds ago       Running             cloud-spanner-emulator                   0                   250cd8096a098       cloud-spanner-emulator-6548d5df46-szcg6
	01019d218b30e       d2fd211e7dcaa       39 seconds ago       Running             registry-proxy                           0                   d74df5e46f2e4       registry-proxy-9glk4
	5d26828e50f02       aa61ee9c70bc4       41 seconds ago       Running             volume-snapshot-controller               0                   7a672ba890a2a       snapshot-controller-58dbcc7b99-6kgv4
	d217baa2aca03       b29d748098e32       47 seconds ago       Exited              patch                                    0                   c34a98f43732c       gcp-auth-certs-patch-r2kbr
	20a599186d3d6       b29d748098e32       47 seconds ago       Exited              create                                   0                   8f1ffbb35361d       gcp-auth-certs-create-w7z9d
	5301f05c5e491       a1ed5895ba635       48 seconds ago       Running             csi-external-health-monitor-controller   0                   9f9c36dd11a73       csi-hostpathplugin-vs7pr
	4e62fbc28777f       19a639eda60f0       50 seconds ago       Running             csi-resizer                              0                   60950ec0342e6       csi-hostpath-resizer-0
	e64f6d8fe245c       ead0a4a53df89       About a minute ago   Running             coredns                                  0                   1a2c8dbb9ad7e       coredns-5dd5756b68-nrxkn
	eb94701666855       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   8a1e2a1a1ade8       kube-ingress-dns-minikube
	5bf5073f06a33       4950bb10b3f87       About a minute ago   Running             kindnet-cni                              0                   0db11107d5a5d       kindnet-xlkc9
	8449e2b444d12       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   595f84aee8613       storage-provisioner
	2534513c00415       83f6cc407eed8       About a minute ago   Running             kube-proxy                               0                   3ef66c0cb5654       kube-proxy-plkpb
	c118ead8adeab       73deb9a3f7025       About a minute ago   Running             etcd                                     0                   af71b18fc0c24       etcd-addons-130663
	fc05e01d28fbf       e3db313c6dbc0       About a minute ago   Running             kube-scheduler                           0                   73b2d67ffc678       kube-scheduler-addons-130663
	9859a64a8c513       7fe0e6f37db33       About a minute ago   Running             kube-apiserver                           0                   0144cf24097dd       kube-apiserver-addons-130663
	bce11c7af1262       d058aa5ab969c       About a minute ago   Running             kube-controller-manager                  0                   f67cb0809b9b0       kube-controller-manager-addons-130663
	
	
	==> containerd <==
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.078245460Z" level=info msg="TearDown network for sandbox \"25f981b38e8b2a8421e2838e89cfd2b0d8ec4fe9e2299bff8a42875e0b161d78\" successfully"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.078304021Z" level=info msg="StopPodSandbox for \"25f981b38e8b2a8421e2838e89cfd2b0d8ec4fe9e2299bff8a42875e0b161d78\" returns successfully"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.609781427Z" level=info msg="shim disconnected" id=66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.609858393Z" level=warning msg="cleaning up after shim disconnected" id=66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba namespace=k8s.io
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.609871398Z" level=info msg="cleaning up dead shim"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.619449628Z" level=warning msg="cleanup warnings time=\"2024-03-14T18:01:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8074 runtime=io.containerd.runc.v2\n"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.621940874Z" level=info msg="StopContainer for \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\" returns successfully"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.622472179Z" level=info msg="StopPodSandbox for \"541b66f9553fc59b70aab9c1edb9812cd285d18b1f6f3f7d67a8447746781ae6\""
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.622554931Z" level=info msg="Container to stop \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.664965013Z" level=info msg="shim disconnected" id=541b66f9553fc59b70aab9c1edb9812cd285d18b1f6f3f7d67a8447746781ae6
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.665254253Z" level=warning msg="cleaning up after shim disconnected" id=541b66f9553fc59b70aab9c1edb9812cd285d18b1f6f3f7d67a8447746781ae6 namespace=k8s.io
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.665412447Z" level=info msg="cleaning up dead shim"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.703250930Z" level=warning msg="cleanup warnings time=\"2024-03-14T18:01:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8107 runtime=io.containerd.runc.v2\n"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.742312879Z" level=info msg="TearDown network for sandbox \"541b66f9553fc59b70aab9c1edb9812cd285d18b1f6f3f7d67a8447746781ae6\" successfully"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.742377551Z" level=info msg="StopPodSandbox for \"541b66f9553fc59b70aab9c1edb9812cd285d18b1f6f3f7d67a8447746781ae6\" returns successfully"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.759059332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:headlamp-5485c556b-j9qgd,Uid:59da228d-77b8-40b7-9475-e0a6198d09fa,Namespace:headlamp,Attempt:0,}"
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.792885461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.793002219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.793012720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.793255201Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8c1a586af0b795f8b12861b82fdf456561d954cd2a46dac6efab0b1a65e0188 pid=8174 runtime=io.containerd.runc.v2
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.840318428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:headlamp-5485c556b-j9qgd,Uid:59da228d-77b8-40b7-9475-e0a6198d09fa,Namespace:headlamp,Attempt:0,} returns sandbox id \"f8c1a586af0b795f8b12861b82fdf456561d954cd2a46dac6efab0b1a65e0188\""
	Mar 14 18:01:38 addons-130663 containerd[806]: time="2024-03-14T18:01:38.842445410Z" level=info msg="PullImage \"ghcr.io/headlamp-k8s/headlamp:v0.23.0@sha256:94e00732e1b43057a9135dafc7483781aea4a73a26cec449ed19f4d8794308d5\""
	Mar 14 18:01:39 addons-130663 containerd[806]: time="2024-03-14T18:01:39.017015457Z" level=info msg="RemoveContainer for \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\""
	Mar 14 18:01:39 addons-130663 containerd[806]: time="2024-03-14T18:01:39.022912142Z" level=info msg="RemoveContainer for \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\" returns successfully"
	Mar 14 18:01:39 addons-130663 containerd[806]: time="2024-03-14T18:01:39.023413760Z" level=error msg="ContainerStatus for \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\": not found"
	
	
	==> coredns [e64f6d8fe245c10b8e6528e051501224fa346dd400e95b567932d9ee532bbcd3] <==
	[INFO] 10.244.0.9:56192 - 25994 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083878s
	[INFO] 10.244.0.9:54365 - 62295 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004788408s
	[INFO] 10.244.0.9:54365 - 49736 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006585245s
	[INFO] 10.244.0.9:32873 - 62147 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005260286s
	[INFO] 10.244.0.9:32873 - 46784 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006489962s
	[INFO] 10.244.0.9:55376 - 29651 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00610089s
	[INFO] 10.244.0.9:55376 - 1711 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00654947s
	[INFO] 10.244.0.9:52971 - 2491 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000074994s
	[INFO] 10.244.0.9:52971 - 41662 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000103454s
	[INFO] 10.244.0.21:51069 - 52813 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00019071s
	[INFO] 10.244.0.21:33911 - 48270 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284233s
	[INFO] 10.244.0.21:50101 - 57353 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094698s
	[INFO] 10.244.0.21:60886 - 10405 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126438s
	[INFO] 10.244.0.21:47911 - 54436 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117553s
	[INFO] 10.244.0.21:58232 - 14146 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168235s
	[INFO] 10.244.0.21:50135 - 5765 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.009595379s
	[INFO] 10.244.0.21:51427 - 9340 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.013718855s
	[INFO] 10.244.0.21:60335 - 35746 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.01015638s
	[INFO] 10.244.0.21:59182 - 51634 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.01043446s
	[INFO] 10.244.0.21:58731 - 64756 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006409211s
	[INFO] 10.244.0.21:53861 - 57293 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006882764s
	[INFO] 10.244.0.21:53488 - 24062 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000793067s
	[INFO] 10.244.0.21:45282 - 424 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000988403s
	[INFO] 10.244.0.23:53737 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174107s
	[INFO] 10.244.0.23:46418 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00013676s
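	The NXDOMAIN cascade above is the pod's resolver walking its search list: with the Kubernetes default of `ndots:5`, a name such as `registry.kube-system.svc.cluster.local` (4 dots) is first tried against every search domain before being queried literally, which is why the final bare-name lookups return NOERROR. A minimal sketch of that expansion logic, using the search domains visible in the logged queries (a real pod's resolv.conf would also list namespace-scoped domains like `<ns>.svc.cluster.local`, omitted here):

	```python
	# Sketch of resolv.conf-style search-list expansion as seen in the
	# coredns log above. SEARCH is taken from the suffixes visible in the
	# logged queries; NDOTS = 5 is the Kubernetes default.
	SEARCH = [
	    "cluster.local",
	    "us-central1-a.c.k8s-minikube.internal",
	    "c.k8s-minikube.internal",
	    "google.internal",
	]
	NDOTS = 5

	def candidates(name: str) -> list[str]:
	    """Return the FQDN candidates tried, in order, for a queried name."""
	    if name.endswith("."):
	        # Absolute (rooted) name: queried as-is, no search-list expansion.
	        return [name.rstrip(".")]
	    tries = []
	    if name.count(".") >= NDOTS:
	        # "Enough" dots: the literal name is tried first.
	        tries.append(name)
	    tries += [f"{name}.{domain}" for domain in SEARCH]
	    if name.count(".") < NDOTS:
	        # Fewer dots than ndots: the literal name is tried last,
	        # producing the NXDOMAIN cascade before the final NOERROR.
	        tries.append(name)
	    return tries
	```

	For `registry.kube-system.svc.cluster.local`, the first candidate is the `.cluster.local`-suffixed form that NXDOMAINs in the log, and the literal name comes last.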
	
	
	==> describe nodes <==
	Name:               addons-130663
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-130663
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=addons-130663
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T17_59_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-130663
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-130663"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 17:59:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-130663
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:01:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:01:25 +0000   Thu, 14 Mar 2024 17:59:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:01:25 +0000   Thu, 14 Mar 2024 17:59:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:01:25 +0000   Thu, 14 Mar 2024 17:59:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:01:25 +0000   Thu, 14 Mar 2024 18:00:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-130663
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 95fe4ac2927647d9afe4cd67ff13a238
	  System UUID:                72322021-8c12-4139-8d71-1383ac7cceda
	  Boot ID:                    87de36c9-cb7e-4733-a847-a433aa55bac2
	  Kernel Version:             5.15.0-1053-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-szcg6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-945jd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  gcp-auth                    gcp-auth-7d69788767-c8gfj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  headlamp                    headlamp-5485c556b-j9qgd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-97467    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-nrxkn                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     93s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 csi-hostpathplugin-vs7pr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 etcd-addons-130663                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         106s
	  kube-system                 kindnet-xlkc9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-addons-130663                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-addons-130663        200m (2%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-plkpb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-addons-130663                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 registry-proxy-9glk4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 registry-rwp7h                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-58dbcc7b99-6kgv4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 snapshot-controller-58dbcc7b99-bvvb5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  local-path-storage          local-path-provisioner-78b46b4d5c-q4vq9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-fnsss               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             438Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 90s   kube-proxy       
	  Normal  Starting                 106s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s  kubelet          Node addons-130663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s  kubelet          Node addons-130663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s  kubelet          Node addons-130663 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             106s  kubelet          Node addons-130663 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  106s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                96s   kubelet          Node addons-130663 status is now: NodeReady
	  Normal  RegisteredNode           93s   node-controller  Node addons-130663 event: Registered Node addons-130663 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +1.027563] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000007] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  -0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000006] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +0.000014] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000004] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +2.015775] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000001] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +0.000003] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +0.003950] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000007] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +4.127607] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000006] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +0.000010] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000008] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +8.187115] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000008] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +0.000029] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000005] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	[  +0.000019] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-d17dacb7b79e
	[  +0.000005] ll header: 00000000: 02 42 e9 85 b9 7f 02 42 c0 a8 5e 02 08 00
	
	
	==> etcd [c118ead8adeab16d2595bf040959a61dd96a2340f5f7fbef3f105fa4fcd6854e] <==
	{"level":"info","ts":"2024-03-14T18:01:07.447615Z","caller":"traceutil/trace.go:171","msg":"trace[665511772] transaction","detail":"{read_only:false; response_revision:1042; number_of_response:1; }","duration":"204.580439ms","start":"2024-03-14T18:01:07.243015Z","end":"2024-03-14T18:01:07.447595Z","steps":["trace[665511772] 'process raft request'  (duration: 204.498027ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:01:07.447639Z","caller":"traceutil/trace.go:171","msg":"trace[798864454] transaction","detail":"{read_only:false; response_revision:1040; number_of_response:1; }","duration":"205.783012ms","start":"2024-03-14T18:01:07.241842Z","end":"2024-03-14T18:01:07.447625Z","steps":["trace[798864454] 'process raft request'  (duration: 166.284617ms)","trace[798864454] 'compare'  (duration: 39.213621ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T18:01:07.447891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.716376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10836"}
	{"level":"info","ts":"2024-03-14T18:01:07.44851Z","caller":"traceutil/trace.go:171","msg":"trace[1511223504] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1043; }","duration":"141.343154ms","start":"2024-03-14T18:01:07.307153Z","end":"2024-03-14T18:01:07.448496Z","steps":["trace[1511223504] 'agreement among raft nodes before linearized reading'  (duration: 140.659058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:01:07.448087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.800659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:88073"}
	{"level":"info","ts":"2024-03-14T18:01:07.449202Z","caller":"traceutil/trace.go:171","msg":"trace[448997763] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1043; }","duration":"107.921079ms","start":"2024-03-14T18:01:07.341268Z","end":"2024-03-14T18:01:07.449189Z","steps":["trace[448997763] 'agreement among raft nodes before linearized reading'  (duration: 106.588263ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:01:08.65384Z","caller":"traceutil/trace.go:171","msg":"trace[1531798637] linearizableReadLoop","detail":"{readStateIndex:1079; appliedIndex:1078; }","duration":"119.644636ms","start":"2024-03-14T18:01:08.534176Z","end":"2024-03-14T18:01:08.65382Z","steps":["trace[1531798637] 'read index received'  (duration: 119.505651ms)","trace[1531798637] 'applied index is now lower than readState.Index'  (duration: 138.225µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T18:01:08.653922Z","caller":"traceutil/trace.go:171","msg":"trace[1676841705] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"124.515868ms","start":"2024-03-14T18:01:08.529376Z","end":"2024-03-14T18:01:08.653892Z","steps":["trace[1676841705] 'process raft request'  (duration: 124.290471ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:01:08.654038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.863139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13505"}
	{"level":"info","ts":"2024-03-14T18:01:08.654078Z","caller":"traceutil/trace.go:171","msg":"trace[1609741610] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1048; }","duration":"119.918503ms","start":"2024-03-14T18:01:08.534149Z","end":"2024-03-14T18:01:08.654068Z","steps":["trace[1609741610] 'agreement among raft nodes before linearized reading'  (duration: 119.759737ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:01:11.965474Z","caller":"traceutil/trace.go:171","msg":"trace[1297316494] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"151.87582ms","start":"2024-03-14T18:01:11.81357Z","end":"2024-03-14T18:01:11.965446Z","steps":["trace[1297316494] 'process raft request'  (duration: 72.93509ms)","trace[1297316494] 'compare'  (duration: 78.760597ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T18:01:11.965473Z","caller":"traceutil/trace.go:171","msg":"trace[512994866] transaction","detail":"{read_only:false; response_revision:1073; number_of_response:1; }","duration":"115.325388ms","start":"2024-03-14T18:01:11.850132Z","end":"2024-03-14T18:01:11.965457Z","steps":["trace[512994866] 'process raft request'  (duration: 115.259367ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:01:11.965488Z","caller":"traceutil/trace.go:171","msg":"trace[1223417157] linearizableReadLoop","detail":"{readStateIndex:1103; appliedIndex:1102; }","duration":"123.510117ms","start":"2024-03-14T18:01:11.84196Z","end":"2024-03-14T18:01:11.965471Z","steps":["trace[1223417157] 'read index received'  (duration: 44.543995ms)","trace[1223417157] 'applied index is now lower than readState.Index'  (duration: 78.963905ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T18:01:11.967146Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.127531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:88135"}
	{"level":"info","ts":"2024-03-14T18:01:11.967214Z","caller":"traceutil/trace.go:171","msg":"trace[335442892] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1073; }","duration":"125.266341ms","start":"2024-03-14T18:01:11.841927Z","end":"2024-03-14T18:01:11.967194Z","steps":["trace[335442892] 'agreement among raft nodes before linearized reading'  (duration: 123.588208ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:01:12.17911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.535422ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128027824952381063 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/yakd-dashboard/yakd-dashboard\" mod_revision:577 > success:<request_put:<key:\"/registry/services/endpoints/yakd-dashboard/yakd-dashboard\" value_size:784 >> failure:<request_range:<key:\"/registry/services/endpoints/yakd-dashboard/yakd-dashboard\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T18:01:12.179292Z","caller":"traceutil/trace.go:171","msg":"trace[437289578] linearizableReadLoop","detail":"{readStateIndex:1106; appliedIndex:1104; }","duration":"207.574089ms","start":"2024-03-14T18:01:11.971703Z","end":"2024-03-14T18:01:12.179277Z","steps":["trace[437289578] 'read index received'  (duration: 102.336604ms)","trace[437289578] 'applied index is now lower than readState.Index'  (duration: 105.236663ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T18:01:12.179331Z","caller":"traceutil/trace.go:171","msg":"trace[74117251] transaction","detail":"{read_only:false; response_revision:1074; number_of_response:1; }","duration":"208.563124ms","start":"2024-03-14T18:01:11.970721Z","end":"2024-03-14T18:01:12.179285Z","steps":["trace[74117251] 'process raft request'  (duration: 103.33843ms)","trace[74117251] 'compare'  (duration: 104.44096ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T18:01:12.179342Z","caller":"traceutil/trace.go:171","msg":"trace[2139407514] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"208.243899ms","start":"2024-03-14T18:01:11.971074Z","end":"2024-03-14T18:01:12.179318Z","steps":["trace[2139407514] 'process raft request'  (duration: 208.131717ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:01:12.179415Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.721938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-vksj4\" ","response":"range_response_count:1 size:3885"}
	{"level":"info","ts":"2024-03-14T18:01:12.179446Z","caller":"traceutil/trace.go:171","msg":"trace[1629707018] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-vksj4; range_end:; response_count:1; response_revision:1075; }","duration":"207.756086ms","start":"2024-03-14T18:01:11.971677Z","end":"2024-03-14T18:01:12.179433Z","steps":["trace[1629707018] 'agreement among raft nodes before linearized reading'  (duration: 207.688047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:01:12.179505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.513575ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-03-14T18:01:12.179534Z","caller":"traceutil/trace.go:171","msg":"trace[718849407] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:1075; }","duration":"177.548107ms","start":"2024-03-14T18:01:12.001977Z","end":"2024-03-14T18:01:12.179525Z","steps":["trace[718849407] 'agreement among raft nodes before linearized reading'  (duration: 177.473967ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:01:12.179624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.455916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13591"}
	{"level":"info","ts":"2024-03-14T18:01:12.17965Z","caller":"traceutil/trace.go:171","msg":"trace[924250047] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1075; }","duration":"145.48678ms","start":"2024-03-14T18:01:12.034156Z","end":"2024-03-14T18:01:12.179643Z","steps":["trace[924250047] 'agreement among raft nodes before linearized reading'  (duration: 145.415734ms)"],"step_count":1}
	
	
	==> gcp-auth [c57a9cae8ce0cf34c0304448839e85be0328819a49bd4af2d5ab6347a6ce938f] <==
	2024/03/14 18:01:24 GCP Auth Webhook started!
	2024/03/14 18:01:30 Ready to marshal response ...
	2024/03/14 18:01:30 Ready to write response ...
	2024/03/14 18:01:35 Ready to marshal response ...
	2024/03/14 18:01:35 Ready to write response ...
	2024/03/14 18:01:38 Ready to marshal response ...
	2024/03/14 18:01:38 Ready to write response ...
	2024/03/14 18:01:38 Ready to marshal response ...
	2024/03/14 18:01:38 Ready to write response ...
	2024/03/14 18:01:38 Ready to marshal response ...
	2024/03/14 18:01:38 Ready to write response ...
	2024/03/14 18:01:40 Ready to marshal response ...
	2024/03/14 18:01:40 Ready to write response ...
	
	
	==> kernel <==
	 18:01:40 up  2:44,  0 users,  load average: 1.58, 2.02, 2.29
	Linux addons-130663 5.15.0-1053-gcp #61~20.04.1-Ubuntu SMP Mon Feb 26 16:50:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [5bf5073f06a334ca251af8bde4ae5eaf5d372c5dfdc4d5a45cdeb00bb387449d] <==
	I0314 18:00:16.700114       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0314 18:00:16.700193       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0314 18:00:16.700423       1 main.go:116] setting mtu 1500 for CNI 
	I0314 18:00:16.700445       1 main.go:146] kindnetd IP family: "ipv4"
	I0314 18:00:16.700471       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 18:00:17.199404       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:00:17.199451       1 main.go:227] handling current node
	I0314 18:00:27.212938       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:00:27.212972       1 main.go:227] handling current node
	I0314 18:00:37.225298       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:00:37.225322       1 main.go:227] handling current node
	I0314 18:00:47.229789       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:00:47.229816       1 main.go:227] handling current node
	I0314 18:00:57.241470       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:00:57.241506       1 main.go:227] handling current node
	I0314 18:01:07.450113       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:01:07.450153       1 main.go:227] handling current node
	I0314 18:01:17.455293       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:01:17.455330       1 main.go:227] handling current node
	I0314 18:01:27.459100       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:01:27.459131       1 main.go:227] handling current node
	I0314 18:01:37.501019       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 18:01:37.501051       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9859a64a8c5138a39c5a402c3def6f7e75c7cbebbf3b67b0da17510ba6430cac] <==
	I0314 18:00:14.406756       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0314 18:00:14.833649       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 18:00:14.834088       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 18:00:15.006344       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 18:00:16.667738       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.111.255.112"}
	I0314 18:00:16.707300       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0314 18:00:16.799587       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.113.216"}
	W0314 18:00:17.238903       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 18:00:18.108000       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.13.124"}
	I0314 18:00:50.452141       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0314 18:01:07.237713       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.12.26:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.12.26:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.12.26:443: connect: connection refused
	W0314 18:01:07.238717       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 18:01:07.238803       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 18:01:07.248465       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0314 18:01:07.454053       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0314 18:01:07.464410       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0314 18:01:33.743040       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.22:60544: read: connection reset by peer
	E0314 18:01:38.100224       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc009febc50), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0xc0081b67d0), ResponseWriter:(*httpsnoop.rw)(0xc0081b67d0), Flusher:(*httpsnoop.rw)(0xc0081b67d0), CloseNotifier:(*httpsnoop.rw)(0xc0081b67d0), Pusher:(*httpsnoop.rw)(0xc0081b67d0)}}, encoder:(*versioning.codec)(0xc00bef1720), memAllocator:(*runtime.Allocator)(0xc006cff338)})
	I0314 18:01:38.417551       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.87.143"}
	W0314 18:01:38.746397       1 dispatcher.go:217] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.102.100.205:443: connect: connection refused
	I0314 18:01:39.859869       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0314 18:01:40.103920       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.25.205"}
	
	
	==> kube-controller-manager [bce11c7af1262684844d7fb48e0808cc04b951161eb6b49a1919f2bb67771cc8] <==
	I0314 18:01:14.933165       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0314 18:01:15.915019       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="5.602257ms"
	I0314 18:01:15.915169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="83.22µs"
	I0314 18:01:22.942110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="114.567µs"
	I0314 18:01:24.016641       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0314 18:01:24.017877       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0314 18:01:24.048085       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0314 18:01:24.049205       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0314 18:01:24.139108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="8.976458ms"
	I0314 18:01:24.139225       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="67.127µs"
	I0314 18:01:24.960219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="5.988692ms"
	I0314 18:01:24.960333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="65.948µs"
	I0314 18:01:29.763911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="4.64626ms"
	I0314 18:01:29.764054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="85.692µs"
	I0314 18:01:35.600538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="11.485µs"
	I0314 18:01:35.613092       1 event.go:307] "Event occurred" object="kube-system/tiller-deploy" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/tiller-deploy: Operation cannot be fulfilled on endpoints \"tiller-deploy\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/kube-system/tiller-deploy, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d544ae40-76ff-4279-bbaf-ea5482aaa4e6, UID in object meta: "
	I0314 18:01:37.431836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="8.507µs"
	I0314 18:01:38.436122       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-5485c556b to 1"
	I0314 18:01:38.448303       1 event.go:307] "Event occurred" object="headlamp/headlamp-5485c556b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-5485c556b-j9qgd"
	I0314 18:01:38.454030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="18.570297ms"
	I0314 18:01:38.509918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="55.828162ms"
	I0314 18:01:38.521823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="11.842103ms"
	I0314 18:01:38.521936       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="67.108µs"
	I0314 18:01:38.651473       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="9.430899ms"
	I0314 18:01:38.651618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="86.859µs"
	
	
	==> kube-proxy [2534513c004159ed3d55c12a45ecdb9a6b2b2952356dff17693a40173512238a] <==
	I0314 18:00:08.316738       1 server_others.go:69] "Using iptables proxy"
	I0314 18:00:08.512716       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0314 18:00:09.010545       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0314 18:00:09.018582       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:00:09.018634       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0314 18:00:09.018646       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0314 18:00:09.018684       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:00:09.018911       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:00:09.018922       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:00:09.099294       1 config.go:188] "Starting service config controller"
	I0314 18:00:09.099320       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:00:09.099346       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:00:09.099350       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:00:09.099604       1 config.go:315] "Starting node config controller"
	I0314 18:00:09.099612       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:00:09.200658       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:00:09.200727       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:00:09.200746       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fc05e01d28fbf39a12ae1091bb558c738ae21a01f23092fbff919370ec50a02f] <==
	W0314 17:59:50.712288       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 17:59:50.712331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 17:59:50.712652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 17:59:50.712945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 17:59:50.713720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 17:59:50.714713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 17:59:50.714799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 17:59:50.713887       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 17:59:50.714256       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 17:59:50.714879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 17:59:50.714407       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 17:59:50.714935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 17:59:50.714518       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 17:59:50.714976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 17:59:50.715796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 17:59:50.715816       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 17:59:51.556261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 17:59:51.556306       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 17:59:51.569638       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 17:59:51.569671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 17:59:51.599154       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 17:59:51.599192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 17:59:51.610771       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 17:59:51.610799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 17:59:54.803625       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 18:01:38 addons-130663 kubelet[1620]: E0314 18:01:38.455637    1620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="06a01b4c-26d3-4589-b5c2-583042a56221" containerName="tiller"
	Mar 14 18:01:38 addons-130663 kubelet[1620]: I0314 18:01:38.455699    1620 memory_manager.go:346] "RemoveStaleState removing state" podUID="61f5f3df-6246-47a8-a439-506cf7dde3e5" containerName="registry-test"
	Mar 14 18:01:38 addons-130663 kubelet[1620]: I0314 18:01:38.455711    1620 memory_manager.go:346] "RemoveStaleState removing state" podUID="06a01b4c-26d3-4589-b5c2-583042a56221" containerName="tiller"
	Mar 14 18:01:38 addons-130663 kubelet[1620]: I0314 18:01:38.520411    1620 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcrmb\" (UniqueName: \"kubernetes.io/projected/59da228d-77b8-40b7-9475-e0a6198d09fa-kube-api-access-zcrmb\") pod \"headlamp-5485c556b-j9qgd\" (UID: \"59da228d-77b8-40b7-9475-e0a6198d09fa\") " pod="headlamp/headlamp-5485c556b-j9qgd"
	Mar 14 18:01:38 addons-130663 kubelet[1620]: I0314 18:01:38.520486    1620 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/59da228d-77b8-40b7-9475-e0a6198d09fa-gcp-creds\") pod \"headlamp-5485c556b-j9qgd\" (UID: \"59da228d-77b8-40b7-9475-e0a6198d09fa\") " pod="headlamp/headlamp-5485c556b-j9qgd"
	Mar 14 18:01:38 addons-130663 kubelet[1620]: I0314 18:01:38.924423    1620 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/12e34166-87d0-420e-838e-3330b6c08895-tmp-dir\") pod \"12e34166-87d0-420e-838e-3330b6c08895\" (UID: \"12e34166-87d0-420e-838e-3330b6c08895\") "
	Mar 14 18:01:38 addons-130663 kubelet[1620]: I0314 18:01:38.924486    1620 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tpmp\" (UniqueName: \"kubernetes.io/projected/12e34166-87d0-420e-838e-3330b6c08895-kube-api-access-6tpmp\") pod \"12e34166-87d0-420e-838e-3330b6c08895\" (UID: \"12e34166-87d0-420e-838e-3330b6c08895\") "
	Mar 14 18:01:38 addons-130663 kubelet[1620]: I0314 18:01:38.924764    1620 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e34166-87d0-420e-838e-3330b6c08895-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "12e34166-87d0-420e-838e-3330b6c08895" (UID: "12e34166-87d0-420e-838e-3330b6c08895"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Mar 14 18:01:38 addons-130663 kubelet[1620]: I0314 18:01:38.926843    1620 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e34166-87d0-420e-838e-3330b6c08895-kube-api-access-6tpmp" (OuterVolumeSpecName: "kube-api-access-6tpmp") pod "12e34166-87d0-420e-838e-3330b6c08895" (UID: "12e34166-87d0-420e-838e-3330b6c08895"). InnerVolumeSpecName "kube-api-access-6tpmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.013474    1620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f981b38e8b2a8421e2838e89cfd2b0d8ec4fe9e2299bff8a42875e0b161d78"
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.015408    1620 scope.go:117] "RemoveContainer" containerID="66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba"
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.023150    1620 scope.go:117] "RemoveContainer" containerID="66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba"
	Mar 14 18:01:39 addons-130663 kubelet[1620]: E0314 18:01:39.023642    1620 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\": not found" containerID="66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba"
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.023693    1620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba"} err="failed to get container status \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\": rpc error: code = NotFound desc = an error occurred when try to find container \"66609906e5ee88336a19688aa5746168fb4a075e154dffde3cbf6be9a97b2bba\": not found"
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.024990    1620 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/12e34166-87d0-420e-838e-3330b6c08895-tmp-dir\") on node \"addons-130663\" DevicePath \"\""
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.025036    1620 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6tpmp\" (UniqueName: \"kubernetes.io/projected/12e34166-87d0-420e-838e-3330b6c08895-kube-api-access-6tpmp\") on node \"addons-130663\" DevicePath \"\""
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.317223    1620 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="12e34166-87d0-420e-838e-3330b6c08895" path="/var/lib/kubelet/pods/12e34166-87d0-420e-838e-3330b6c08895/volumes"
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.317655    1620 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="61f5f3df-6246-47a8-a439-506cf7dde3e5" path="/var/lib/kubelet/pods/61f5f3df-6246-47a8-a439-506cf7dde3e5/volumes"
	Mar 14 18:01:39 addons-130663 kubelet[1620]: I0314 18:01:39.520456    1620 scope.go:117] "RemoveContainer" containerID="54b069cf1df821da34dbb3737766784177abe465a36c554e37b29a97ca8f8c17"
	Mar 14 18:01:39 addons-130663 kubelet[1620]: E0314 18:01:39.521165    1620 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 20s restarting failed container=gadget pod=gadget-945jd_gadget(06ac6c50-891b-4cd5-8b4d-fca7aed2b1db)\"" pod="gadget/gadget-945jd" podUID="06ac6c50-891b-4cd5-8b4d-fca7aed2b1db"
	Mar 14 18:01:40 addons-130663 kubelet[1620]: I0314 18:01:40.021472    1620 topology_manager.go:215] "Topology Admit Handler" podUID="59267d2f-2206-471f-a566-7859cceb8e2c" podNamespace="default" podName="nginx"
	Mar 14 18:01:40 addons-130663 kubelet[1620]: E0314 18:01:40.021575    1620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12e34166-87d0-420e-838e-3330b6c08895" containerName="metrics-server"
	Mar 14 18:01:40 addons-130663 kubelet[1620]: I0314 18:01:40.021630    1620 memory_manager.go:346] "RemoveStaleState removing state" podUID="12e34166-87d0-420e-838e-3330b6c08895" containerName="metrics-server"
	Mar 14 18:01:40 addons-130663 kubelet[1620]: I0314 18:01:40.132359    1620 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/59267d2f-2206-471f-a566-7859cceb8e2c-gcp-creds\") pod \"nginx\" (UID: \"59267d2f-2206-471f-a566-7859cceb8e2c\") " pod="default/nginx"
	Mar 14 18:01:40 addons-130663 kubelet[1620]: I0314 18:01:40.132432    1620 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcx8s\" (UniqueName: \"kubernetes.io/projected/59267d2f-2206-471f-a566-7859cceb8e2c-kube-api-access-xcx8s\") pod \"nginx\" (UID: \"59267d2f-2206-471f-a566-7859cceb8e2c\") " pod="default/nginx"
	
	
	==> storage-provisioner [8449e2b444d12ae448f56bb9fed3106f7a505a132dfabadc8c2e56853e7fa81f] <==
	I0314 18:00:13.522089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 18:00:13.609213       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 18:00:13.609271       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 18:00:13.621384       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 18:00:13.621585       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-130663_218ab3bf-25c6-4581-98ba-ca4b9e6aa272!
	I0314 18:00:13.623032       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1256acb3-3fc2-45f1-b4ff-1667a415961d", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-130663_218ab3bf-25c6-4581-98ba-ca4b9e6aa272 became leader
	I0314 18:00:13.797538       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-130663_218ab3bf-25c6-4581-98ba-ca4b9e6aa272!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-130663 -n addons-130663
helpers_test.go:261: (dbg) Run:  kubectl --context addons-130663 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx headlamp-5485c556b-j9qgd ingress-nginx-admission-create-klrjw ingress-nginx-admission-patch-vksj4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-130663 describe pod nginx headlamp-5485c556b-j9qgd ingress-nginx-admission-create-klrjw ingress-nginx-admission-patch-vksj4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-130663 describe pod nginx headlamp-5485c556b-j9qgd ingress-nginx-admission-create-klrjw ingress-nginx-admission-patch-vksj4: exit status 1 (106.950024ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-130663/192.168.49.2
	Start Time:       Thu, 14 Mar 2024 18:01:40 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xcx8s (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-xcx8s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/nginx to addons-130663
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-5485c556b-j9qgd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-klrjw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vksj4" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-130663 describe pod nginx headlamp-5485c556b-j9qgd ingress-nginx-admission-create-klrjw ingress-nginx-admission-patch-vksj4: exit status 1
--- FAIL: TestAddons/parallel/Registry (16.34s)

Test pass (308/335)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.37
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 5.01
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.22
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 6.94
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 1.24
30 TestBinaryMirror 0.73
31 TestOffline 62.07
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 129.39
39 TestAddons/parallel/Ingress 29.89
40 TestAddons/parallel/InspektorGadget 11.22
41 TestAddons/parallel/MetricsServer 5.67
42 TestAddons/parallel/HelmTiller 10.33
44 TestAddons/parallel/CSI 62.74
45 TestAddons/parallel/Headlamp 12.91
46 TestAddons/parallel/CloudSpanner 5.77
47 TestAddons/parallel/LocalPath 54.57
48 TestAddons/parallel/NvidiaDevicePlugin 6.51
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.14
53 TestAddons/StoppedEnableDisable 12.17
54 TestCertOptions 26.85
55 TestCertExpiration 217.48
57 TestForceSystemdFlag 29.64
58 TestForceSystemdEnv 38.83
59 TestDockerEnvContainerd 39.71
60 TestKVMDriverInstallOrUpdate 3.27
64 TestErrorSpam/setup 21.19
65 TestErrorSpam/start 0.62
66 TestErrorSpam/status 0.91
67 TestErrorSpam/pause 1.53
68 TestErrorSpam/unpause 1.59
69 TestErrorSpam/stop 1.91
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 48.03
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.03
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.06
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.33
81 TestFunctional/serial/CacheCmd/cache/add_local 1.5
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
89 TestFunctional/serial/ExtraConfig 42.58
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.45
92 TestFunctional/serial/LogsFileCmd 1.48
93 TestFunctional/serial/InvalidService 4.55
95 TestFunctional/parallel/ConfigCmd 0.5
96 TestFunctional/parallel/DashboardCmd 19.19
97 TestFunctional/parallel/DryRun 0.42
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.01
103 TestFunctional/parallel/ServiceCmdConnect 11.53
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 24.21
107 TestFunctional/parallel/SSHCmd 0.64
108 TestFunctional/parallel/CpCmd 1.84
109 TestFunctional/parallel/MySQL 22.99
110 TestFunctional/parallel/FileSync 0.37
111 TestFunctional/parallel/CertSync 2.04
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
119 TestFunctional/parallel/License 0.23
120 TestFunctional/parallel/Version/short 0.08
121 TestFunctional/parallel/Version/components 0.73
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
126 TestFunctional/parallel/ImageCommands/ImageBuild 4.02
127 TestFunctional/parallel/ImageCommands/Setup 1.12
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
134 TestFunctional/parallel/ProfileCmd/profile_list 0.48
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.53
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.28
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.52
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.93
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/ServiceCmd/DeployApp 6.39
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.86
150 TestFunctional/parallel/MountCmd/any-port 6.91
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.25
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.19
154 TestFunctional/parallel/ServiceCmd/List 0.66
155 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
157 TestFunctional/parallel/ServiceCmd/Format 0.4
158 TestFunctional/parallel/ServiceCmd/URL 0.4
159 TestFunctional/parallel/MountCmd/specific-port 1.91
160 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMutliControlPlane/serial/StartCluster 115.05
168 TestMutliControlPlane/serial/DeployApp 17.85
169 TestMutliControlPlane/serial/PingHostFromPods 1.18
170 TestMutliControlPlane/serial/AddWorkerNode 22.34
171 TestMutliControlPlane/serial/NodeLabels 0.06
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.65
173 TestMutliControlPlane/serial/CopyFile 16.77
174 TestMutliControlPlane/serial/StopSecondaryNode 12.55
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
176 TestMutliControlPlane/serial/RestartSecondaryNode 15.37
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.7
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 109.36
179 TestMutliControlPlane/serial/DeleteSecondaryNode 9.94
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.48
181 TestMutliControlPlane/serial/StopCluster 35.7
182 TestMutliControlPlane/serial/RestartCluster 68.06
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.48
184 TestMutliControlPlane/serial/AddSecondaryNode 42.02
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.65
189 TestJSONOutput/start/Command 49.99
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.66
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.6
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.69
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 32.73
215 TestKicCustomNetwork/use_default_bridge_network 27.48
216 TestKicExistingNetwork 26.94
217 TestKicCustomSubnet 25.59
218 TestKicStaticIP 28.23
219 TestMainNoArgs 0.07
220 TestMinikubeProfile 48.95
223 TestMountStart/serial/StartWithMountFirst 7.83
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 5.06
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.59
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.18
230 TestMountStart/serial/RestartStopped 6.77
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 66.54
235 TestMultiNode/serial/DeployApp2Nodes 3.51
236 TestMultiNode/serial/PingHostFrom2Pods 0.8
237 TestMultiNode/serial/AddNode 17.98
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.3
240 TestMultiNode/serial/CopyFile 9.48
241 TestMultiNode/serial/StopNode 2.14
242 TestMultiNode/serial/StartAfterStop 8.79
243 TestMultiNode/serial/RestartKeepsNodes 85.03
244 TestMultiNode/serial/DeleteNode 5.07
245 TestMultiNode/serial/StopMultiNode 23.75
246 TestMultiNode/serial/RestartMultiNode 51.71
247 TestMultiNode/serial/ValidateNameConflict 26.8
252 TestPreload 108.36
254 TestScheduledStopUnix 99.85
257 TestInsufficientStorage 13.14
258 TestRunningBinaryUpgrade 57.24
260 TestKubernetesUpgrade 402.91
261 TestMissingContainerUpgrade 143.22
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
267 TestNoKubernetes/serial/StartWithK8s 33.25
272 TestNetworkPlugins/group/false 8.52
276 TestNoKubernetes/serial/StartWithStopK8s 15.92
277 TestNoKubernetes/serial/Start 6.96
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
279 TestNoKubernetes/serial/ProfileList 5.69
280 TestStoppedBinaryUpgrade/Setup 0.52
281 TestNoKubernetes/serial/Stop 1.29
282 TestStoppedBinaryUpgrade/Upgrade 136.77
283 TestNoKubernetes/serial/StartNoArgs 6.1
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
294 TestPause/serial/Start 52.96
295 TestNetworkPlugins/group/auto/Start 56.13
296 TestNetworkPlugins/group/kindnet/Start 54.42
297 TestPause/serial/SecondStartNoReconfiguration 5.35
298 TestPause/serial/Pause 0.72
299 TestPause/serial/VerifyStatus 0.34
300 TestPause/serial/Unpause 0.69
301 TestPause/serial/PauseAgain 0.84
302 TestPause/serial/DeletePaused 2.76
303 TestPause/serial/VerifyDeletedResources 0.8
304 TestNetworkPlugins/group/calico/Start 69.92
305 TestNetworkPlugins/group/auto/KubeletFlags 0.31
306 TestNetworkPlugins/group/auto/NetCatPod 9.22
307 TestNetworkPlugins/group/auto/DNS 0.18
308 TestNetworkPlugins/group/auto/Localhost 0.13
309 TestNetworkPlugins/group/auto/HairPin 0.11
310 TestNetworkPlugins/group/custom-flannel/Start 56.76
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
313 TestNetworkPlugins/group/kindnet/NetCatPod 9.22
314 TestNetworkPlugins/group/kindnet/DNS 0.16
315 TestNetworkPlugins/group/kindnet/Localhost 0.14
316 TestNetworkPlugins/group/kindnet/HairPin 0.13
317 TestNetworkPlugins/group/enable-default-cni/Start 78.44
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.28
320 TestNetworkPlugins/group/calico/NetCatPod 9.18
321 TestNetworkPlugins/group/calico/DNS 0.13
322 TestNetworkPlugins/group/calico/Localhost 0.12
323 TestNetworkPlugins/group/calico/HairPin 0.12
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
326 TestNetworkPlugins/group/custom-flannel/DNS 0.16
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
329 TestNetworkPlugins/group/bridge/Start 79.09
330 TestNetworkPlugins/group/flannel/Start 52.64
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
333 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
338 TestStartStop/group/old-k8s-version/serial/FirstStart 147.44
339 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
340 TestNetworkPlugins/group/flannel/NetCatPod 9.98
342 TestStartStop/group/no-preload/serial/FirstStart 69.97
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
344 TestNetworkPlugins/group/bridge/NetCatPod 9.22
345 TestNetworkPlugins/group/flannel/DNS 0.22
346 TestNetworkPlugins/group/flannel/Localhost 0.19
347 TestNetworkPlugins/group/flannel/HairPin 0.17
348 TestNetworkPlugins/group/bridge/DNS 0.16
349 TestNetworkPlugins/group/bridge/Localhost 0.14
350 TestNetworkPlugins/group/bridge/HairPin 0.13
352 TestStartStop/group/embed-certs/serial/FirstStart 52.64
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54
355 TestStartStop/group/no-preload/serial/DeployApp 9.23
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
357 TestStartStop/group/no-preload/serial/Stop 11.91
358 TestStartStop/group/embed-certs/serial/DeployApp 8.25
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
360 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
361 TestStartStop/group/no-preload/serial/SecondStart 262.83
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
363 TestStartStop/group/embed-certs/serial/Stop 11.97
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
367 TestStartStop/group/embed-certs/serial/SecondStart 262.93
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.8
370 TestStartStop/group/old-k8s-version/serial/DeployApp 8.37
371 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
372 TestStartStop/group/old-k8s-version/serial/Stop 11.88
373 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
374 TestStartStop/group/old-k8s-version/serial/SecondStart 62.12
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 53.01
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
378 TestStartStop/group/old-k8s-version/serial/Pause 2.65
380 TestStartStop/group/newest-cni/serial/FirstStart 37.12
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
383 TestStartStop/group/newest-cni/serial/Stop 1.21
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
385 TestStartStop/group/newest-cni/serial/SecondStart 13.39
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/newest-cni/serial/Pause 2.66
390 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
392 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
393 TestStartStop/group/no-preload/serial/Pause 2.8
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
396 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
398 TestStartStop/group/embed-certs/serial/Pause 2.75
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.68
TestDownloadOnly/v1.20.0/json-events (6.37s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-765000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-765000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.367825961s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.37s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-765000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-765000: exit status 85 (80.687529ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-765000 | jenkins | v1.32.0 | 14 Mar 24 17:58 UTC |          |
	|         | -p download-only-765000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 17:58:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 17:58:53.944479  715480 out.go:291] Setting OutFile to fd 1 ...
	I0314 17:58:53.944779  715480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:58:53.944791  715480 out.go:304] Setting ErrFile to fd 2...
	I0314 17:58:53.944795  715480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:58:53.945011  715480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	W0314 17:58:53.945163  715480 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18384-708595/.minikube/config/config.json: open /home/jenkins/minikube-integration/18384-708595/.minikube/config/config.json: no such file or directory
	I0314 17:58:53.945820  715480 out.go:298] Setting JSON to true
	I0314 17:58:53.946784  715480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9685,"bootTime":1710429449,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 17:58:53.946861  715480 start.go:139] virtualization: kvm guest
	I0314 17:58:53.949659  715480 out.go:97] [download-only-765000] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W0314 17:58:53.949785  715480 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball: no such file or directory
	I0314 17:58:53.951387  715480 out.go:169] MINIKUBE_LOCATION=18384
	I0314 17:58:53.949884  715480 notify.go:220] Checking for updates...
	I0314 17:58:53.954711  715480 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 17:58:53.956483  715480 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	I0314 17:58:53.958055  715480 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	I0314 17:58:53.959510  715480 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0314 17:58:53.962257  715480 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 17:58:53.962580  715480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 17:58:53.985276  715480 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 17:58:53.985430  715480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 17:58:54.034570  715480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-14 17:58:54.025628718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 17:58:54.034689  715480 docker.go:295] overlay module found
	I0314 17:58:54.036539  715480 out.go:97] Using the docker driver based on user configuration
	I0314 17:58:54.036561  715480 start.go:297] selected driver: docker
	I0314 17:58:54.036566  715480 start.go:901] validating driver "docker" against <nil>
	I0314 17:58:54.036652  715480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 17:58:54.085757  715480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-14 17:58:54.076513783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 17:58:54.085941  715480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 17:58:54.086937  715480 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0314 17:58:54.087166  715480 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 17:58:54.089812  715480 out.go:169] Using Docker driver with root privileges
	I0314 17:58:54.091340  715480 cni.go:84] Creating CNI manager for ""
	I0314 17:58:54.091354  715480 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 17:58:54.091363  715480 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 17:58:54.091456  715480 start.go:340] cluster config:
	{Name:download-only-765000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:58:54.093036  715480 out.go:97] Starting "download-only-765000" primary control-plane node in "download-only-765000" cluster
	I0314 17:58:54.093076  715480 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 17:58:54.094481  715480 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0314 17:58:54.094519  715480 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0314 17:58:54.094619  715480 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 17:58:54.110016  715480 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 17:58:54.110188  715480 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 17:58:54.110264  715480 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 17:58:54.123267  715480 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0314 17:58:54.123301  715480 cache.go:56] Caching tarball of preloaded images
	I0314 17:58:54.123442  715480 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0314 17:58:54.125431  715480 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0314 17:58:54.125461  715480 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0314 17:58:54.163891  715480 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0314 17:58:57.139739  715480 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 17:58:58.630972  715480 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0314 17:58:58.631094  715480 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-765000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-765000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-765000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.28.4/json-events (5.01s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-484464 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-484464 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.009429609s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.01s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-484464
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-484464: exit status 85 (77.265565ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-765000 | jenkins | v1.32.0 | 14 Mar 24 17:58 UTC |                     |
	|         | -p download-only-765000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-765000        | download-only-765000 | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| start   | -o=json --download-only        | download-only-484464 | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | -p download-only-484464        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 17:59:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 17:59:00.752724  715766 out.go:291] Setting OutFile to fd 1 ...
	I0314 17:59:00.752999  715766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:59:00.753011  715766 out.go:304] Setting ErrFile to fd 2...
	I0314 17:59:00.753017  715766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:59:00.753250  715766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 17:59:00.753916  715766 out.go:298] Setting JSON to true
	I0314 17:59:00.754860  715766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9692,"bootTime":1710429449,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 17:59:00.754928  715766 start.go:139] virtualization: kvm guest
	I0314 17:59:00.757103  715766 out.go:97] [download-only-484464] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 17:59:00.758448  715766 out.go:169] MINIKUBE_LOCATION=18384
	I0314 17:59:00.757306  715766 notify.go:220] Checking for updates...
	I0314 17:59:00.761267  715766 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 17:59:00.762623  715766 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	I0314 17:59:00.763844  715766 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	I0314 17:59:00.765102  715766 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0314 17:59:00.767445  715766 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 17:59:00.767677  715766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 17:59:00.788307  715766 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 17:59:00.788403  715766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 17:59:00.838652  715766 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-14 17:59:00.8298071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 17:59:00.838760  715766 docker.go:295] overlay module found
	I0314 17:59:00.840350  715766 out.go:97] Using the docker driver based on user configuration
	I0314 17:59:00.840377  715766 start.go:297] selected driver: docker
	I0314 17:59:00.840382  715766 start.go:901] validating driver "docker" against <nil>
	I0314 17:59:00.840478  715766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 17:59:00.886492  715766 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-03-14 17:59:00.877990747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 17:59:00.886721  715766 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 17:59:00.887453  715766 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0314 17:59:00.887659  715766 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 17:59:00.889534  715766 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-484464 host does not exist
	  To start a cluster, run: "minikube start -p download-only-484464"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-484464
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (6.94s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-095108 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-095108 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.944192604s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (6.94s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-095108
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-095108: exit status 85 (76.851959ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-765000 | jenkins | v1.32.0 | 14 Mar 24 17:58 UTC |                     |
	|         | -p download-only-765000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-765000           | download-only-765000 | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| start   | -o=json --download-only           | download-only-484464 | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | -p download-only-484464           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| delete  | -p download-only-484464           | download-only-484464 | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC | 14 Mar 24 17:59 UTC |
	| start   | -o=json --download-only           | download-only-095108 | jenkins | v1.32.0 | 14 Mar 24 17:59 UTC |                     |
	|         | -p download-only-095108           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 17:59:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 17:59:06.196278  716054 out.go:291] Setting OutFile to fd 1 ...
	I0314 17:59:06.196404  716054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:59:06.196425  716054 out.go:304] Setting ErrFile to fd 2...
	I0314 17:59:06.196429  716054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 17:59:06.196619  716054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 17:59:06.197226  716054 out.go:298] Setting JSON to true
	I0314 17:59:06.198219  716054 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9698,"bootTime":1710429449,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 17:59:06.198293  716054 start.go:139] virtualization: kvm guest
	I0314 17:59:06.200462  716054 out.go:97] [download-only-095108] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 17:59:06.200652  716054 notify.go:220] Checking for updates...
	I0314 17:59:06.201845  716054 out.go:169] MINIKUBE_LOCATION=18384
	I0314 17:59:06.203307  716054 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 17:59:06.204695  716054 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	I0314 17:59:06.206051  716054 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	I0314 17:59:06.207491  716054 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0314 17:59:06.210282  716054 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 17:59:06.210525  716054 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 17:59:06.231799  716054 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 17:59:06.231964  716054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 17:59:06.281798  716054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-14 17:59:06.272855854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 17:59:06.281947  716054 docker.go:295] overlay module found
	I0314 17:59:06.283748  716054 out.go:97] Using the docker driver based on user configuration
	I0314 17:59:06.283776  716054 start.go:297] selected driver: docker
	I0314 17:59:06.283787  716054 start.go:901] validating driver "docker" against <nil>
	I0314 17:59:06.283897  716054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 17:59:06.330299  716054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-14 17:59:06.321027155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 17:59:06.330529  716054 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 17:59:06.331231  716054 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0314 17:59:06.331433  716054 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 17:59:06.333314  716054 out.go:169] Using Docker driver with root privileges
	I0314 17:59:06.334867  716054 cni.go:84] Creating CNI manager for ""
	I0314 17:59:06.334882  716054 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 17:59:06.334895  716054 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 17:59:06.335481  716054 start.go:340] cluster config:
	{Name:download-only-095108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-095108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 17:59:06.337094  716054 out.go:97] Starting "download-only-095108" primary control-plane node in "download-only-095108" cluster
	I0314 17:59:06.337120  716054 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 17:59:06.338476  716054 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0314 17:59:06.338500  716054 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0314 17:59:06.338550  716054 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 17:59:06.353979  716054 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 17:59:06.354101  716054 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 17:59:06.354118  716054 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0314 17:59:06.354122  716054 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0314 17:59:06.354130  716054 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 17:59:06.365146  716054 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0314 17:59:06.365175  716054 cache.go:56] Caching tarball of preloaded images
	I0314 17:59:06.365312  716054 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0314 17:59:06.366992  716054 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0314 17:59:06.367015  716054 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0314 17:59:06.405446  716054 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0314 17:59:09.667520  716054 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0314 17:59:09.667622  716054 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18384-708595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0314 17:59:10.433500  716054 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on containerd
	I0314 17:59:10.433852  716054 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/download-only-095108/config.json ...
	I0314 17:59:10.433885  716054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/download-only-095108/config.json: {Name:mk265aed65f139a505757d65b2ba62dceee20e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 17:59:10.434106  716054 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0314 17:59:10.434274  716054 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18384-708595/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-095108 host does not exist
	  To start a cluster, run: "minikube start -p download-only-095108"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
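The download step above fetches the preload tarball with a `checksum=md5:...` query parameter (download.go:107) and then verifies the digest locally (preload.go:237/255). A minimal sketch of that verify step, using a hypothetical in-memory payload in place of the downloaded tarball; this is not minikube's code, just the pattern:

```python
import hashlib

def verify_md5(data: bytes, expected_hex: str) -> bool:
    """Compare the md5 digest of data against an expected hex digest."""
    return hashlib.md5(data).hexdigest() == expected_hex

# Hypothetical stand-in for the downloaded preload tarball bytes.
payload = b"preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4"
good = hashlib.md5(payload).hexdigest()

print(verify_md5(payload, good))      # intact download -> True
print(verify_md5(payload, "0" * 32))  # corrupted/mismatched download -> False
```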

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-095108
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.24s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-106123 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-106123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-106123
--- PASS: TestDownloadOnlyKic (1.24s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-642543 --alsologtostderr --binary-mirror http://127.0.0.1:43189 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-642543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-642543
--- PASS: TestBinaryMirror (0.73s)

TestOffline (62.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-596017 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-596017 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (59.778304596s)
helpers_test.go:175: Cleaning up "offline-containerd-596017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-596017
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-596017: (2.295267298s)
--- PASS: TestOffline (62.07s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-130663
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-130663: exit status 85 (76.78581ms)

-- stdout --
	* Profile "addons-130663" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-130663"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-130663
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-130663: exit status 85 (76.005959ms)

-- stdout --
	* Profile "addons-130663" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-130663"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (129.39s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-130663 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-130663 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m9.392492402s)
--- PASS: TestAddons/Setup (129.39s)

TestAddons/parallel/Ingress (29.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-130663 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context addons-130663 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (7.257969085s)
addons_test.go:232: (dbg) Run:  kubectl --context addons-130663 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:232: (dbg) Non-zero exit: kubectl --context addons-130663 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (98.66382ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.102.100.205:443: connect: connection refused

** /stderr **
addons_test.go:232: (dbg) Run:  kubectl --context addons-130663 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-130663 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [59267d2f-2206-471f-a566-7859cceb8e2c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [59267d2f-2206-471f-a566-7859cceb8e2c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004071271s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-130663 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-130663 addons disable ingress-dns --alsologtostderr -v=1: (1.32501833s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-130663 addons disable ingress --alsologtostderr -v=1: (8.135819259s)
--- PASS: TestAddons/parallel/Ingress (29.89s)
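The `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` step above works because the ingress controller dispatches on the Host header rather than the connection address. A toy sketch of that host-based dispatch idea, not the nginx controller's actual implementation (the rule set and service names are hypothetical):

```python
def route(host: str, rules: dict) -> str:
    """Pick a backend service for a Host header; fall back to a default."""
    return rules.get(host, "default-backend")

# Hypothetical ingress rule, analogous to the nginx-ingress-v1.yaml fixture.
rules = {"nginx.example.com": "nginx-svc"}

print(route("nginx.example.com", rules))  # -> nginx-svc
print(route("other.example.com", rules))  # -> default-backend
```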

TestAddons/parallel/InspektorGadget (11.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-945jd" [06ac6c50-891b-4cd5-8b4d-fca7aed2b1db] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005012873s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-130663
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-130663: (6.216664564s)
--- PASS: TestAddons/parallel/InspektorGadget (11.22s)

TestAddons/parallel/MetricsServer (5.67s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.57851ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-fj9bs" [12e34166-87d0-420e-838e-3330b6c08895] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004339149s
addons_test.go:415: (dbg) Run:  kubectl --context addons-130663 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.67s)

TestAddons/parallel/HelmTiller (10.33s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 12.779378ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-lgjv8" [06a01b4c-26d3-4589-b5c2-583042a56221] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005160108s
addons_test.go:473: (dbg) Run:  kubectl --context addons-130663 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-130663 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.694048897s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.33s)

TestAddons/parallel/CSI (62.74s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 4.115176ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-130663 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-130663 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1563c7e5-960c-4b8f-b097-842a3fbc1d61] Pending
helpers_test.go:344: "task-pv-pod" [1563c7e5-960c-4b8f-b097-842a3fbc1d61] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1563c7e5-960c-4b8f-b097-842a3fbc1d61] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003737771s
addons_test.go:584: (dbg) Run:  kubectl --context addons-130663 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-130663 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-130663 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-130663 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-130663 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-130663 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-130663 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6e3b4af0-85c6-400c-abee-19206170e6a5] Pending
helpers_test.go:344: "task-pv-pod-restore" [6e3b4af0-85c6-400c-abee-19206170e6a5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6e3b4af0-85c6-400c-abee-19206170e6a5] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004195765s
addons_test.go:626: (dbg) Run:  kubectl --context addons-130663 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-130663 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-130663 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-130663 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.585711797s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.74s)

TestAddons/parallel/Headlamp (12.91s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-130663 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-j9qgd" [59da228d-77b8-40b7-9475-e0a6198d09fa] Pending
helpers_test.go:344: "headlamp-5485c556b-j9qgd" [59da228d-77b8-40b7-9475-e0a6198d09fa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-j9qgd" [59da228d-77b8-40b7-9475-e0a6198d09fa] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003892554s
--- PASS: TestAddons/parallel/Headlamp (12.91s)

TestAddons/parallel/CloudSpanner (5.77s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-szcg6" [df5db4e2-fa7e-40c1-95fc-936d43023512] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004590107s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-130663
--- PASS: TestAddons/parallel/CloudSpanner (5.77s)

TestAddons/parallel/LocalPath (54.57s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-130663 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-130663 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ecefa588-29ca-46a6-92ae-c80e0918c21f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ecefa588-29ca-46a6-92ae-c80e0918c21f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ecefa588-29ca-46a6-92ae-c80e0918c21f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004943272s
addons_test.go:891: (dbg) Run:  kubectl --context addons-130663 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 ssh "cat /opt/local-path-provisioner/pvc-7dd60614-0f46-4901-8336-5ab77e8823c4_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-130663 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-130663 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-130663 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-130663 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.279118245s)
--- PASS: TestAddons/parallel/LocalPath (54.57s)

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-h2bht" [c23151de-1acc-465e-8784-a45c2aea26e9] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004037788s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-130663
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (6.00s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-fnsss" [e1b5b826-37b3-40e3-8b7e-37a68c321566] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004009059s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-130663 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-130663 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (12.17s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-130663
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-130663: (11.87602832s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-130663
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-130663
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-130663
--- PASS: TestAddons/StoppedEnableDisable (12.17s)

TestCertOptions (26.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-046913 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-046913 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (24.33776195s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-046913 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-046913 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-046913 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-046913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-046913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-046913: (1.905556102s)
--- PASS: TestCertOptions (26.85s)

TestCertExpiration (217.48s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-675269 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-675269 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (29.030030313s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-675269 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-675269 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.175283567s)
helpers_test.go:175: Cleaning up "cert-expiration-675269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-675269
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-675269: (2.27288786s)
--- PASS: TestCertExpiration (217.48s)

TestForceSystemdFlag (29.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-848419 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-848419 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.309924734s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-848419 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-848419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-848419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-848419: (2.890243517s)
--- PASS: TestForceSystemdFlag (29.64s)

TestForceSystemdEnv (38.83s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-649849 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-649849 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.330355452s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-649849 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-649849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-649849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-649849: (2.229741518s)
--- PASS: TestForceSystemdEnv (38.83s)

TestDockerEnvContainerd (39.71s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-941623 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-941623 --driver=docker  --container-runtime=containerd: (23.86963512s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-941623"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-SMtvHkJxrPcJ/agent.737076" SSH_AGENT_PID="737077" DOCKER_HOST=ssh://docker@127.0.0.1:33517 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-SMtvHkJxrPcJ/agent.737076" SSH_AGENT_PID="737077" DOCKER_HOST=ssh://docker@127.0.0.1:33517 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-SMtvHkJxrPcJ/agent.737076" SSH_AGENT_PID="737077" DOCKER_HOST=ssh://docker@127.0.0.1:33517 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.534959202s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-SMtvHkJxrPcJ/agent.737076" SSH_AGENT_PID="737077" DOCKER_HOST=ssh://docker@127.0.0.1:33517 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-941623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-941623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-941623: (2.271603751s)
--- PASS: TestDockerEnvContainerd (39.71s)

TestKVMDriverInstallOrUpdate (3.27s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.27s)

TestErrorSpam/setup (21.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-748974 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-748974 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-748974 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-748974 --driver=docker  --container-runtime=containerd: (21.18765683s)
--- PASS: TestErrorSpam/setup (21.19s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (1.91s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 stop: (1.709272179s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-748974 --log_dir /tmp/nospam-748974 stop
--- PASS: TestErrorSpam/stop (1.91s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18384-708595/.minikube/files/etc/test/nested/copy/715468/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-952553 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-952553 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.031028718s)
--- PASS: TestFunctional/serial/StartWithProxy (48.03s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.03s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-952553 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-952553 --alsologtostderr -v=8: (5.026174173s)
functional_test.go:659: soft start took 5.0282288s for "functional-952553" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.03s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-952553 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 cache add registry.k8s.io/pause:3.1: (1.068411274s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 cache add registry.k8s.io/pause:3.3: (1.222389454s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 cache add registry.k8s.io/pause:latest: (1.04178259s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-952553 /tmp/TestFunctionalserialCacheCmdcacheadd_local808678359/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cache add minikube-local-cache-test:functional-952553
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 cache add minikube-local-cache-test:functional-952553: (1.112679725s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cache delete minikube-local-cache-test:functional-952553
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-952553
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.50351ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 cache reload: (1.007787764s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 kubectl -- --context functional-952553 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-952553 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.58s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-952553 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-952553 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.578342638s)
functional_test.go:757: restart took 42.578479248s for "functional-952553" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.58s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-952553 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 logs: (1.448802647s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 logs --file /tmp/TestFunctionalserialLogsFileCmd1794964119/001/logs.txt
E0314 18:06:25.389030  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:06:25.394927  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:06:25.405176  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:06:25.425518  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:06:25.465820  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:06:25.546129  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:06:25.706536  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 logs --file /tmp/TestFunctionalserialLogsFileCmd1794964119/001/logs.txt: (1.483287033s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4.55s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-952553 apply -f testdata/invalidsvc.yaml
E0314 18:06:26.027543  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:06:26.668551  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:06:27.948964  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-952553
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-952553: exit status 115 (345.798627ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31837 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-952553 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-952553 delete -f testdata/invalidsvc.yaml: (1.040368391s)
--- PASS: TestFunctional/serial/InvalidService (4.55s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 config get cpus
E0314 18:06:30.510111  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 config get cpus: exit status 14 (88.075744ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 config get cpus: exit status 14 (89.188888ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (19.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-952553 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-952553 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 756999: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.19s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-952553 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-952553 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (192.527218ms)
-- stdout --
	* [functional-952553] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0314 18:06:49.656935  756121 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:06:49.657228  756121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:06:49.657238  756121 out.go:304] Setting ErrFile to fd 2...
	I0314 18:06:49.657242  756121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:06:49.657470  756121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 18:06:49.658158  756121 out.go:298] Setting JSON to false
	I0314 18:06:49.659654  756121 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10161,"bootTime":1710429449,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:06:49.659738  756121 start.go:139] virtualization: kvm guest
	I0314 18:06:49.661992  756121 out.go:177] * [functional-952553] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:06:49.664083  756121 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:06:49.664085  756121 notify.go:220] Checking for updates...
	I0314 18:06:49.665619  756121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:06:49.667260  756121 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	I0314 18:06:49.668667  756121 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	I0314 18:06:49.669987  756121 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:06:49.671448  756121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:06:49.673232  756121 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:06:49.673791  756121 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:06:49.699982  756121 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:06:49.700107  756121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:06:49.759428  756121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:58 SystemTime:2024-03-14 18:06:49.747228961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 18:06:49.759538  756121 docker.go:295] overlay module found
	I0314 18:06:49.761566  756121 out.go:177] * Using the docker driver based on existing profile
	I0314 18:06:49.762931  756121 start.go:297] selected driver: docker
	I0314 18:06:49.762945  756121 start.go:901] validating driver "docker" against &{Name:functional-952553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-952553 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:06:49.763061  756121 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:06:49.765139  756121 out.go:177] 
	W0314 18:06:49.766337  756121 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0314 18:06:49.767674  756121 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-952553 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-952553 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-952553 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (163.642265ms)
-- stdout --
	* [functional-952553] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0314 18:06:44.849214  754448 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:06:44.849376  754448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:06:44.849392  754448 out.go:304] Setting ErrFile to fd 2...
	I0314 18:06:44.849399  754448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:06:44.849738  754448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 18:06:44.850331  754448 out.go:298] Setting JSON to false
	I0314 18:06:44.851442  754448 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10156,"bootTime":1710429449,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:06:44.851521  754448 start.go:139] virtualization: kvm guest
	I0314 18:06:44.854035  754448 out.go:177] * [functional-952553] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0314 18:06:44.855906  754448 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:06:44.855909  754448 notify.go:220] Checking for updates...
	I0314 18:06:44.857403  754448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:06:44.858871  754448 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	I0314 18:06:44.860303  754448 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	I0314 18:06:44.861660  754448 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:06:44.862851  754448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:06:44.864486  754448 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:06:44.864987  754448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:06:44.888993  754448 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:06:44.889162  754448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:06:44.937807  754448 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:58 SystemTime:2024-03-14 18:06:44.928189005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 18:06:44.937923  754448 docker.go:295] overlay module found
	I0314 18:06:44.939710  754448 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0314 18:06:44.941137  754448 start.go:297] selected driver: docker
	I0314 18:06:44.941159  754448 start.go:901] validating driver "docker" against &{Name:functional-952553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-952553 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:06:44.941256  754448 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:06:44.943701  754448 out.go:177] 
	W0314 18:06:44.945470  754448 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0314 18:06:44.946970  754448 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

TestFunctional/parallel/ServiceCmdConnect (11.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-952553 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-952553 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-whhhg" [e05afcec-1f22-4551-85c3-ea41102755f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0314 18:06:35.631323  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-55497b8b78-whhhg" [e05afcec-1f22-4551-85c3-ea41102755f6] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004223883s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31421
functional_test.go:1671: http://192.168.49.2:31421: success! body:
Hostname: hello-node-connect-55497b8b78-whhhg

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31421
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.53s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [873d5397-4e14-46b3-bad9-0f6ae5d1db4a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004827022s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-952553 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-952553 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-952553 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-952553 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ced09b3e-a49a-4ce4-a72d-82c9388570b4] Pending
helpers_test.go:344: "sp-pod" [ced09b3e-a49a-4ce4-a72d-82c9388570b4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ced09b3e-a49a-4ce4-a72d-82c9388570b4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005532827s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-952553 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-952553 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-952553 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [71d2629e-1b11-4745-a43c-18e87bbfda23] Pending
helpers_test.go:344: "sp-pod" [71d2629e-1b11-4745-a43c-18e87bbfda23] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004315921s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-952553 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.21s)

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.84s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh -n functional-952553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cp functional-952553:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1737750621/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh -n functional-952553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh -n functional-952553 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.84s)

TestFunctional/parallel/MySQL (22.99s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-952553 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-hkjdv" [9ef3c5ae-0aea-466c-8022-50d6404ee36f] Pending
helpers_test.go:344: "mysql-859648c796-hkjdv" [9ef3c5ae-0aea-466c-8022-50d6404ee36f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-hkjdv" [9ef3c5ae-0aea-466c-8022-50d6404ee36f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.015902475s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;": exit status 1 (211.028009ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E0314 18:07:06.352546  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;": exit status 1 (116.183572ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;": exit status 1 (102.995467ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
2024/03/14 18:07:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;": exit status 1 (125.355997ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-952553 exec mysql-859648c796-hkjdv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.99s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/715468/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo cat /etc/test/nested/copy/715468/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.04s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/715468.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo cat /etc/ssl/certs/715468.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/715468.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo cat /usr/share/ca-certificates/715468.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/7154682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo cat /etc/ssl/certs/7154682.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/7154682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo cat /usr/share/ca-certificates/7154682.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.04s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-952553 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 ssh "sudo systemctl is-active docker": exit status 1 (333.315041ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 ssh "sudo systemctl is-active crio": exit status 1 (312.80404ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.73s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.73s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-952553 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-952553
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-952553
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-952553 image ls --format short --alsologtostderr:
I0314 18:06:57.835633  759059 out.go:291] Setting OutFile to fd 1 ...
I0314 18:06:57.835766  759059 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:57.835777  759059 out.go:304] Setting ErrFile to fd 2...
I0314 18:06:57.835783  759059 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:57.836103  759059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
I0314 18:06:57.837006  759059 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:57.837175  759059 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:57.837851  759059 cli_runner.go:164] Run: docker container inspect functional-952553 --format={{.State.Status}}
I0314 18:06:57.859807  759059 ssh_runner.go:195] Run: systemctl --version
I0314 18:06:57.859863  759059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952553
I0314 18:06:57.879587  759059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33527 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/functional-952553/id_rsa Username:docker}
I0314 18:06:57.978473  759059 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-952553 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| gcr.io/google-containers/addon-resizer      | functional-952553  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/minikube-local-cache-test | functional-952553  | sha256:8ddaea | 1.01kB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/nginx                     | alpine             | sha256:6913ed | 18MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| docker.io/library/nginx                     | latest             | sha256:92b11f | 70.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-952553 image ls --format table --alsologtostderr:
I0314 18:06:58.426375  759277 out.go:291] Setting OutFile to fd 1 ...
I0314 18:06:58.426615  759277 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:58.426624  759277 out.go:304] Setting ErrFile to fd 2...
I0314 18:06:58.426628  759277 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:58.426873  759277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
I0314 18:06:58.427539  759277 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:58.427665  759277 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:58.428129  759277 cli_runner.go:164] Run: docker container inspect functional-952553 --format={{.State.Status}}
I0314 18:06:58.448440  759277 ssh_runner.go:195] Run: systemctl --version
I0314 18:06:58.448517  759277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952553
I0314 18:06:58.468079  759277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33527 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/functional-952553/id_rsa Username:docker}
I0314 18:06:58.569926  759277 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-952553 image ls --format json --alsologtostderr:
[{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-952553"],"size":"10823156"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:8ddaea4f2695f8cb34675b6408989ae47d894160118fff2a30facef68c9dd18c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-952553"],"size":"1007"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17979704"},{"id":"sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"70534964"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-952553 image ls --format json --alsologtostderr:
I0314 18:06:58.146091  759162 out.go:291] Setting OutFile to fd 1 ...
I0314 18:06:58.146352  759162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:58.146387  759162 out.go:304] Setting ErrFile to fd 2...
I0314 18:06:58.146403  759162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:58.146701  759162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
I0314 18:06:58.147657  759162 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:58.148337  759162 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:58.149543  759162 cli_runner.go:164] Run: docker container inspect functional-952553 --format={{.State.Status}}
I0314 18:06:58.173801  759162 ssh_runner.go:195] Run: systemctl --version
I0314 18:06:58.173868  759162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952553
I0314 18:06:58.193951  759162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33527 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/functional-952553/id_rsa Username:docker}
I0314 18:06:58.301967  759162 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-952553 image ls --format yaml --alsologtostderr:
- id: sha256:8ddaea4f2695f8cb34675b6408989ae47d894160118fff2a30facef68c9dd18c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-952553
size: "1007"
- id: sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "70534964"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "17979704"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-952553
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-952553 image ls --format yaml --alsologtostderr:
I0314 18:06:57.886443  759078 out.go:291] Setting OutFile to fd 1 ...
I0314 18:06:57.886574  759078 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:57.886584  759078 out.go:304] Setting ErrFile to fd 2...
I0314 18:06:57.886590  759078 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:57.886854  759078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
I0314 18:06:57.887504  759078 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:57.887652  759078 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:57.888168  759078 cli_runner.go:164] Run: docker container inspect functional-952553 --format={{.State.Status}}
I0314 18:06:57.908967  759078 ssh_runner.go:195] Run: systemctl --version
I0314 18:06:57.909026  759078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952553
I0314 18:06:57.928777  759078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33527 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/functional-952553/id_rsa Username:docker}
I0314 18:06:58.026604  759078 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 ssh pgrep buildkitd: exit status 1 (295.919094ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image build -t localhost/my-image:functional-952553 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 image build -t localhost/my-image:functional-952553 testdata/build --alsologtostderr: (3.423014179s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-952553 image build -t localhost/my-image:functional-952553 testdata/build --alsologtostderr:
I0314 18:06:58.391267  759262 out.go:291] Setting OutFile to fd 1 ...
I0314 18:06:58.391416  759262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:58.391428  759262 out.go:304] Setting ErrFile to fd 2...
I0314 18:06:58.391434  759262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:06:58.391677  759262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
I0314 18:06:58.392288  759262 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:58.393049  759262 config.go:182] Loaded profile config "functional-952553": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 18:06:58.393558  759262 cli_runner.go:164] Run: docker container inspect functional-952553 --format={{.State.Status}}
I0314 18:06:58.418576  759262 ssh_runner.go:195] Run: systemctl --version
I0314 18:06:58.418641  759262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-952553
I0314 18:06:58.440718  759262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33527 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/functional-952553/id_rsa Username:docker}
I0314 18:06:58.550177  759262 build_images.go:161] Building image from path: /tmp/build.982567944.tar
I0314 18:06:58.550241  759262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0314 18:06:58.560094  759262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.982567944.tar
I0314 18:06:58.563729  759262 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.982567944.tar: stat -c "%s %y" /var/lib/minikube/build/build.982567944.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.982567944.tar': No such file or directory
I0314 18:06:58.563765  759262 ssh_runner.go:362] scp /tmp/build.982567944.tar --> /var/lib/minikube/build/build.982567944.tar (3072 bytes)
I0314 18:06:58.612782  759262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.982567944
I0314 18:06:58.624488  759262 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.982567944 -xf /var/lib/minikube/build/build.982567944.tar
I0314 18:06:58.635068  759262 containerd.go:379] Building image: /var/lib/minikube/build/build.982567944
I0314 18:06:58.635142  759262 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.982567944 --local dockerfile=/var/lib/minikube/build/build.982567944 --output type=image,name=localhost/my-image:functional-952553
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 1.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:8d1a9c24df07ac2686c2cdc8292623de9341bff5326640f3a9f7baf507b3704d 0.0s done
#8 exporting config sha256:8d6d99154ab1a009970552361c40722ddb54532ab2a6b9990479f56d55f83223 0.0s done
#8 naming to localhost/my-image:functional-952553 done
#8 DONE 0.1s
I0314 18:07:01.716684  759262 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.982567944 --local dockerfile=/var/lib/minikube/build/build.982567944 --output type=image,name=localhost/my-image:functional-952553: (3.081509956s)
I0314 18:07:01.716756  759262 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.982567944
I0314 18:07:01.726567  759262 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.982567944.tar
I0314 18:07:01.736305  759262 build_images.go:217] Built localhost/my-image:functional-952553 from /tmp/build.982567944.tar
I0314 18:07:01.736369  759262 build_images.go:133] succeeded building to: functional-952553
I0314 18:07:01.736375  759262 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)

TestFunctional/parallel/ImageCommands/Setup (1.12s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.093001284s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-952553
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.12s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-952553 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-952553 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-952553 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-952553 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 752351: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "382.069781ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "94.49312ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image load --daemon gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 image load --daemon gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr: (4.279412527s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.53s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-952553 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.28s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-952553 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5686433e-e527-48fe-b076-bfbed05925be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5686433e-e527-48fe-b076-bfbed05925be] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004672243s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.28s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "407.408801ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "94.596544ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image load --daemon gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 image load --daemon gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr: (3.281620074s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.52s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-952553
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image load --daemon gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 image load --daemon gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr: (3.733667575s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.93s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-952553 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.93.238 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-952553 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-952553 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-952553 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-qsjrh" [4f7b6faf-0a3a-4ab2-a97f-c78ebac100b8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-qsjrh" [4f7b6faf-0a3a-4ab2-a97f-c78ebac100b8] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004220557s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image save gcr.io/google-containers/addon-resizer:functional-952553 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

TestFunctional/parallel/MountCmd/any-port (6.91s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdany-port2986137673/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710439604954732516" to /tmp/TestFunctionalparallelMountCmdany-port2986137673/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710439604954732516" to /tmp/TestFunctionalparallelMountCmdany-port2986137673/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710439604954732516" to /tmp/TestFunctionalparallelMountCmdany-port2986137673/001/test-1710439604954732516
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.148243ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh -- ls -la /mount-9p
E0314 18:06:45.872275  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 14 18:06 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 14 18:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 14 18:06 test-1710439604954732516
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh cat /mount-9p/test-1710439604954732516
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-952553 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2388408b-cebc-4217-9312-40490dd2f2ae] Pending
helpers_test.go:344: "busybox-mount" [2388408b-cebc-4217-9312-40490dd2f2ae] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2388408b-cebc-4217-9312-40490dd2f2ae] Running
helpers_test.go:344: "busybox-mount" [2388408b-cebc-4217-9312-40490dd2f2ae] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2388408b-cebc-4217-9312-40490dd2f2ae] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.013325344s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-952553 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdany-port2986137673/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image rm gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.022073528s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-952553
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 image save --daemon gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-952553 image save --daemon gcr.io/google-containers/addon-resizer:functional-952553 --alsologtostderr: (1.15255967s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-952553
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 service list -o json
functional_test.go:1490: Took "533.937847ms" to run "out/minikube-linux-amd64 -p functional-952553 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32661
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32661
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdspecific-port1007258524/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.426635ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdspecific-port1007258524/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 ssh "sudo umount -f /mount-9p": exit status 1 (292.303908ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-952553 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdspecific-port1007258524/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2740186278/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2740186278/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2740186278/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T" /mount1: exit status 1 (373.189363ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-952553 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-952553 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2740186278/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2740186278/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-952553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2740186278/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-952553
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-952553
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-952553
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMutliControlPlane/serial/StartCluster (115.05s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150233 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0314 18:07:47.313499  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:09:09.234497  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-150233 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m54.329545458s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (115.05s)

                                                
                                    
TestMutliControlPlane/serial/DeployApp (17.85s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-150233 -- rollout status deployment/busybox: (15.670122063s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-76pn8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-cwwvt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-qkvbw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-76pn8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-cwwvt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-qkvbw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-76pn8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-cwwvt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-qkvbw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (17.85s)

                                                
                                    
TestMutliControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-76pn8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-76pn8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-cwwvt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-cwwvt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-qkvbw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150233 -- exec busybox-5b5d89c9d6-qkvbw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.18s)
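The pipeline above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, extracts the host gateway IP by taking the 5th line of nslookup output and its 3rd single-space-separated field. A self-contained Go sketch of that extraction; the sample nslookup output is illustrative, since real formatting varies between resolvers:

```go
package main

import (
	"fmt"
	"strings"
)

// fieldFromLine returns the nth single-space-separated field of the given
// 1-indexed line, mimicking `awk 'NR==line' | cut -d' ' -f<field>`.
// Out-of-range requests yield the empty string.
func fieldFromLine(out string, line, field int) string {
	lines := strings.Split(out, "\n")
	if line > len(lines) {
		return ""
	}
	fields := strings.Split(lines[line-1], " ")
	if field > len(fields) {
		return ""
	}
	return fields[field-1]
}

func main() {
	// Illustrative busybox nslookup output (line 5 holds the answer).
	sample := `Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal`
	fmt.Println(fieldFromLine(sample, 5, 3)) // the IP the test then pings
}
```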

                                                
                                    
TestMutliControlPlane/serial/AddWorkerNode (22.34s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-150233 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-150233 -v=7 --alsologtostderr: (21.497947006s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (22.34s)

                                                
                                    
TestMutliControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-150233 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                    
TestMutliControlPlane/serial/CopyFile (16.77s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp testdata/cp-test.txt ha-150233:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1534202535/001/cp-test_ha-150233.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233:/home/docker/cp-test.txt ha-150233-m02:/home/docker/cp-test_ha-150233_ha-150233-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m02 "sudo cat /home/docker/cp-test_ha-150233_ha-150233-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233:/home/docker/cp-test.txt ha-150233-m03:/home/docker/cp-test_ha-150233_ha-150233-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m03 "sudo cat /home/docker/cp-test_ha-150233_ha-150233-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233:/home/docker/cp-test.txt ha-150233-m04:/home/docker/cp-test_ha-150233_ha-150233-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m04 "sudo cat /home/docker/cp-test_ha-150233_ha-150233-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp testdata/cp-test.txt ha-150233-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1534202535/001/cp-test_ha-150233-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m02:/home/docker/cp-test.txt ha-150233:/home/docker/cp-test_ha-150233-m02_ha-150233.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233 "sudo cat /home/docker/cp-test_ha-150233-m02_ha-150233.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m02:/home/docker/cp-test.txt ha-150233-m03:/home/docker/cp-test_ha-150233-m02_ha-150233-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m03 "sudo cat /home/docker/cp-test_ha-150233-m02_ha-150233-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m02:/home/docker/cp-test.txt ha-150233-m04:/home/docker/cp-test_ha-150233-m02_ha-150233-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m04 "sudo cat /home/docker/cp-test_ha-150233-m02_ha-150233-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp testdata/cp-test.txt ha-150233-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1534202535/001/cp-test_ha-150233-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m03:/home/docker/cp-test.txt ha-150233:/home/docker/cp-test_ha-150233-m03_ha-150233.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233 "sudo cat /home/docker/cp-test_ha-150233-m03_ha-150233.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m03:/home/docker/cp-test.txt ha-150233-m02:/home/docker/cp-test_ha-150233-m03_ha-150233-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m02 "sudo cat /home/docker/cp-test_ha-150233-m03_ha-150233-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m03:/home/docker/cp-test.txt ha-150233-m04:/home/docker/cp-test_ha-150233-m03_ha-150233-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m04 "sudo cat /home/docker/cp-test_ha-150233-m03_ha-150233-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp testdata/cp-test.txt ha-150233-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1534202535/001/cp-test_ha-150233-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m04:/home/docker/cp-test.txt ha-150233:/home/docker/cp-test_ha-150233-m04_ha-150233.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233 "sudo cat /home/docker/cp-test_ha-150233-m04_ha-150233.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m04:/home/docker/cp-test.txt ha-150233-m02:/home/docker/cp-test_ha-150233-m04_ha-150233-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m02 "sudo cat /home/docker/cp-test_ha-150233-m04_ha-150233-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 cp ha-150233-m04:/home/docker/cp-test.txt ha-150233-m03:/home/docker/cp-test_ha-150233-m04_ha-150233-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 ssh -n ha-150233-m03 "sudo cat /home/docker/cp-test_ha-150233-m04_ha-150233-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (16.77s)
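The CopyFile block above exercises every ordered source-to-destination node pair (plus a host round-trip per node). A sketch of how that pair matrix can be generated; node names are taken from this log, but the helper itself is illustrative rather than the test's actual code:

```go
package main

import "fmt"

// copyPairs returns every ordered (src, dst) pair of distinct nodes,
// i.e. the directed copy matrix the test walks through.
func copyPairs(nodes []string) [][2]string {
	var pairs [][2]string
	for _, src := range nodes {
		for _, dst := range nodes {
			if src != dst {
				pairs = append(pairs, [2]string{src, dst})
			}
		}
	}
	return pairs
}

func main() {
	nodes := []string{"ha-150233", "ha-150233-m02", "ha-150233-m03", "ha-150233-m04"}
	pairs := copyPairs(nodes)
	for _, p := range pairs {
		fmt.Printf("minikube -p ha-150233 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
			p[0], p[1], p[0], p[1])
	}
	// 4 nodes yield 4*3 = 12 directed pairs, matching the log above.
	fmt.Println("total pairs:", len(pairs))
}
```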

                                                
                                    
TestMutliControlPlane/serial/StopSecondaryNode (12.55s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-150233 node stop m02 -v=7 --alsologtostderr: (11.864720038s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr: exit status 7 (689.154422ms)

                                                
                                                
-- stdout --
	ha-150233
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150233-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-150233-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150233-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0314 18:10:22.119055  779512 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:10:22.119187  779512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:10:22.119197  779512 out.go:304] Setting ErrFile to fd 2...
	I0314 18:10:22.119201  779512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:10:22.119430  779512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 18:10:22.119642  779512 out.go:298] Setting JSON to false
	I0314 18:10:22.119680  779512 mustload.go:65] Loading cluster: ha-150233
	I0314 18:10:22.119725  779512 notify.go:220] Checking for updates...
	I0314 18:10:22.120092  779512 config.go:182] Loaded profile config "ha-150233": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:10:22.120108  779512 status.go:255] checking status of ha-150233 ...
	I0314 18:10:22.120507  779512 cli_runner.go:164] Run: docker container inspect ha-150233 --format={{.State.Status}}
	I0314 18:10:22.138578  779512 status.go:330] ha-150233 host status = "Running" (err=<nil>)
	I0314 18:10:22.138604  779512 host.go:66] Checking if "ha-150233" exists ...
	I0314 18:10:22.138865  779512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-150233
	I0314 18:10:22.155749  779512 host.go:66] Checking if "ha-150233" exists ...
	I0314 18:10:22.156035  779512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:10:22.156091  779512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-150233
	I0314 18:10:22.172746  779512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33532 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/ha-150233/id_rsa Username:docker}
	I0314 18:10:22.286683  779512 ssh_runner.go:195] Run: systemctl --version
	I0314 18:10:22.290671  779512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:10:22.301376  779512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:10:22.353385  779512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:77 SystemTime:2024-03-14 18:10:22.342667523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 18:10:22.354922  779512 kubeconfig.go:125] found "ha-150233" server: "https://192.168.49.254:8443"
	I0314 18:10:22.355258  779512 api_server.go:166] Checking apiserver status ...
	I0314 18:10:22.355309  779512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:10:22.366029  779512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	I0314 18:10:22.374693  779512 api_server.go:182] apiserver freezer: "2:freezer:/docker/bb9a1ede5f21e690b8ba44d7e2aef50fda9cb4008d727755e00a867264d6a3d9/kubepods/burstable/pod66fc34b10ae197ff9f202fafb2b0c177/f68c197b90b8b41bf26301ddcf68a82274c037b082b7f296eabe766486d80179"
	I0314 18:10:22.374766  779512 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb9a1ede5f21e690b8ba44d7e2aef50fda9cb4008d727755e00a867264d6a3d9/kubepods/burstable/pod66fc34b10ae197ff9f202fafb2b0c177/f68c197b90b8b41bf26301ddcf68a82274c037b082b7f296eabe766486d80179/freezer.state
	I0314 18:10:22.382664  779512 api_server.go:204] freezer state: "THAWED"
	I0314 18:10:22.382690  779512 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0314 18:10:22.386768  779512 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0314 18:10:22.386800  779512 status.go:422] ha-150233 apiserver status = Running (err=<nil>)
	I0314 18:10:22.386816  779512 status.go:257] ha-150233 status: &{Name:ha-150233 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:10:22.386842  779512 status.go:255] checking status of ha-150233-m02 ...
	I0314 18:10:22.387084  779512 cli_runner.go:164] Run: docker container inspect ha-150233-m02 --format={{.State.Status}}
	I0314 18:10:22.404778  779512 status.go:330] ha-150233-m02 host status = "Stopped" (err=<nil>)
	I0314 18:10:22.404807  779512 status.go:343] host is not running, skipping remaining checks
	I0314 18:10:22.404813  779512 status.go:257] ha-150233-m02 status: &{Name:ha-150233-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:10:22.404835  779512 status.go:255] checking status of ha-150233-m03 ...
	I0314 18:10:22.405095  779512 cli_runner.go:164] Run: docker container inspect ha-150233-m03 --format={{.State.Status}}
	I0314 18:10:22.421849  779512 status.go:330] ha-150233-m03 host status = "Running" (err=<nil>)
	I0314 18:10:22.421879  779512 host.go:66] Checking if "ha-150233-m03" exists ...
	I0314 18:10:22.422186  779512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-150233-m03
	I0314 18:10:22.439125  779512 host.go:66] Checking if "ha-150233-m03" exists ...
	I0314 18:10:22.439391  779512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:10:22.439428  779512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-150233-m03
	I0314 18:10:22.456599  779512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/ha-150233-m03/id_rsa Username:docker}
	I0314 18:10:22.546348  779512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:10:22.557266  779512 kubeconfig.go:125] found "ha-150233" server: "https://192.168.49.254:8443"
	I0314 18:10:22.557297  779512 api_server.go:166] Checking apiserver status ...
	I0314 18:10:22.557373  779512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:10:22.567241  779512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1455/cgroup
	I0314 18:10:22.576127  779512 api_server.go:182] apiserver freezer: "2:freezer:/docker/1ef41d087538d2a19bb7cba8c5feb858e373ed5f95338e19fe96eaff8c169e17/kubepods/burstable/pode69641024342513e7356bc2f46213af4/ca27fcdca6ab0c9aa78de7f90b9cdd6c3780afb0f273e519ace8fbb9efad631f"
	I0314 18:10:22.576195  779512 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1ef41d087538d2a19bb7cba8c5feb858e373ed5f95338e19fe96eaff8c169e17/kubepods/burstable/pode69641024342513e7356bc2f46213af4/ca27fcdca6ab0c9aa78de7f90b9cdd6c3780afb0f273e519ace8fbb9efad631f/freezer.state
	I0314 18:10:22.583641  779512 api_server.go:204] freezer state: "THAWED"
	I0314 18:10:22.583676  779512 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0314 18:10:22.587543  779512 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0314 18:10:22.587567  779512 status.go:422] ha-150233-m03 apiserver status = Running (err=<nil>)
	I0314 18:10:22.587579  779512 status.go:257] ha-150233-m03 status: &{Name:ha-150233-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:10:22.587604  779512 status.go:255] checking status of ha-150233-m04 ...
	I0314 18:10:22.587839  779512 cli_runner.go:164] Run: docker container inspect ha-150233-m04 --format={{.State.Status}}
	I0314 18:10:22.605936  779512 status.go:330] ha-150233-m04 host status = "Running" (err=<nil>)
	I0314 18:10:22.605969  779512 host.go:66] Checking if "ha-150233-m04" exists ...
	I0314 18:10:22.606205  779512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-150233-m04
	I0314 18:10:22.623743  779512 host.go:66] Checking if "ha-150233-m04" exists ...
	I0314 18:10:22.624001  779512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:10:22.624041  779512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-150233-m04
	I0314 18:10:22.640961  779512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/ha-150233-m04/id_rsa Username:docker}
	I0314 18:10:22.734226  779512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:10:22.744623  779512 status.go:257] ha-150233-m04 status: &{Name:ha-150233-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.55s)
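The stderr capture above shows how the `status` probe verifies each running control-plane apiserver: it resolves the apiserver PID with `pgrep`, extracts that process's freezer cgroup from `/proc/<pid>/cgroup`, confirms the cgroup state is `THAWED`, and finally polls `https://192.168.49.254:8443/healthz`. A minimal sketch of the cgroup-line parse (function name hypothetical; the sample line mirrors the log's `egrep '^[0-9]+:freezer:'` hit, with container and pod IDs abbreviated):

```python
def freezer_cgroup_path(proc_cgroup_text: str) -> str:
    """Return the freezer hierarchy's path from /proc/<pid>/cgroup contents.

    cgroup v1 lines have the form "<hierarchy-id>:<controllers>:<path>".
    """
    for line in proc_cgroup_text.splitlines():
        parts = line.split(":", 2)
        if len(parts) == 3 and "freezer" in parts[1].split(","):
            return parts[2]
    raise ValueError("no freezer hierarchy found")

# Abbreviated sample in the shape the log shows:
sample = "2:freezer:/docker/bb9a1e.../kubepods/burstable/pod66fc.../f68c19..."
print(freezer_cgroup_path(sample))
```

The returned path is what the probe appends to `/sys/fs/cgroup/freezer/` before reading `freezer.state`.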
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)
=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)
TestMutliControlPlane/serial/RestartSecondaryNode (15.37s)
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-150233 node start m02 -v=7 --alsologtostderr: (14.44488151s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (15.37s)
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.7s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.70s)
TestMutliControlPlane/serial/RestartClusterKeepsNodes (109.36s)
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-150233 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-150233 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-150233 -v=7 --alsologtostderr: (26.35370082s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150233 --wait=true -v=7 --alsologtostderr
E0314 18:11:25.389656  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:11:31.880952  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:31.886249  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:31.896539  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:31.916827  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:31.957151  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:32.037426  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:32.197865  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:32.518146  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:33.159020  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:34.440107  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:37.000310  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:42.120908  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:52.361962  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:11:53.075232  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:12:12.842858  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-150233 --wait=true -v=7 --alsologtostderr: (1m22.878354001s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-150233
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (109.36s)
TestMutliControlPlane/serial/DeleteSecondaryNode (9.94s)
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-150233 node delete m03 -v=7 --alsologtostderr: (9.164805146s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (9.94s)
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.48s)
=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.48s)
TestMutliControlPlane/serial/StopCluster (35.7s)
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 stop -v=7 --alsologtostderr
E0314 18:12:53.804108  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-150233 stop -v=7 --alsologtostderr: (35.583748928s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr: exit status 7 (111.96605ms)
-- stdout --
	ha-150233
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-150233-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-150233-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0314 18:13:14.746192  794982 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:13:14.746744  794982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:13:14.746764  794982 out.go:304] Setting ErrFile to fd 2...
	I0314 18:13:14.746772  794982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:13:14.747224  794982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 18:13:14.747589  794982 out.go:298] Setting JSON to false
	I0314 18:13:14.747636  794982 mustload.go:65] Loading cluster: ha-150233
	I0314 18:13:14.747730  794982 notify.go:220] Checking for updates...
	I0314 18:13:14.748364  794982 config.go:182] Loaded profile config "ha-150233": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:13:14.748384  794982 status.go:255] checking status of ha-150233 ...
	I0314 18:13:14.748831  794982 cli_runner.go:164] Run: docker container inspect ha-150233 --format={{.State.Status}}
	I0314 18:13:14.765463  794982 status.go:330] ha-150233 host status = "Stopped" (err=<nil>)
	I0314 18:13:14.765483  794982 status.go:343] host is not running, skipping remaining checks
	I0314 18:13:14.765490  794982 status.go:257] ha-150233 status: &{Name:ha-150233 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:13:14.765527  794982 status.go:255] checking status of ha-150233-m02 ...
	I0314 18:13:14.765781  794982 cli_runner.go:164] Run: docker container inspect ha-150233-m02 --format={{.State.Status}}
	I0314 18:13:14.781690  794982 status.go:330] ha-150233-m02 host status = "Stopped" (err=<nil>)
	I0314 18:13:14.781711  794982 status.go:343] host is not running, skipping remaining checks
	I0314 18:13:14.781719  794982 status.go:257] ha-150233-m02 status: &{Name:ha-150233-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:13:14.781744  794982 status.go:255] checking status of ha-150233-m04 ...
	I0314 18:13:14.782106  794982 cli_runner.go:164] Run: docker container inspect ha-150233-m04 --format={{.State.Status}}
	I0314 18:13:14.797044  794982 status.go:330] ha-150233-m04 host status = "Stopped" (err=<nil>)
	I0314 18:13:14.797070  794982 status.go:343] host is not running, skipping remaining checks
	I0314 18:13:14.797077  794982 status.go:257] ha-150233-m04 status: &{Name:ha-150233-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (35.70s)
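Both `status` invocations against stopped nodes in this report exit with status 7 rather than 0, so the test can detect "node(s) down" without parsing the text. When only the captured text is available, the `host:` lines are enough; a hypothetical helper (the sample reproduces the stdout above in abbreviated form):

```python
def stopped_hosts(status_stdout: str) -> int:
    # Count node blocks whose "host:" line reports Stopped.
    return sum(1 for line in status_stdout.splitlines()
               if line.strip() == "host: Stopped")

sample = """\
ha-150233
type: Control Plane
host: Stopped
ha-150233-m02
type: Control Plane
host: Stopped
ha-150233-m04
type: Worker
host: Stopped
"""
print(stopped_hosts(sample))  # all three remaining nodes are stopped
```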
TestMutliControlPlane/serial/RestartCluster (68.06s)
=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150233 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0314 18:14:15.724753  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-150233 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.267483972s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (68.06s)
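The `kubectl get nodes -o go-template` call above walks every node's `status.conditions` and prints the status of each `Ready` condition, one per line. The same traversal, sketched in Python over an illustrative NodeList-shaped dict (field names follow the Kubernetes API; the data values are made up):

```python
def ready_statuses(node_list: dict) -> list[str]:
    # Mirror the template: range over items, then over each item's
    # status.conditions, emitting .status where .type == "Ready".
    return [cond["status"]
            for item in node_list.get("items", [])
            for cond in item.get("status", {}).get("conditions", [])
            if cond.get("type") == "Ready"]

nodes = {"items": [
    {"status": {"conditions": [
        {"type": "MemoryPressure", "status": "False"},
        {"type": "Ready", "status": "True"},
    ]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]}
print(ready_statuses(nodes))  # one entry per node
```

A node is considered healthy by the test only when every emitted status is "True".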
TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.48s)
=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.48s)
TestMutliControlPlane/serial/AddSecondaryNode (42.02s)
=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-150233 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-150233 --control-plane -v=7 --alsologtostderr: (41.174451829s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-150233 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (42.02s)
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)
TestJSONOutput/start/Command (49.99s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-987384 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-987384 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (49.990580207s)
--- PASS: TestJSONOutput/start/Command (49.99s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.66s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-987384 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-987384 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.69s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-987384 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-987384 --output=json --user=testUser: (5.691096058s)
--- PASS: TestJSONOutput/stop/Command (5.69s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-814930 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-814930 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.676227ms)
-- stdout --
	{"specversion":"1.0","id":"11a549fa-7cab-4af7-9982-2091167e1f2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-814930] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f34b50a-44d9-43ba-bd80-62c929e43258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"f16feff2-0f8c-44b4-89f1-0a3bb775a7f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b54abc60-6bcb-4cbf-9316-844f3494d75b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig"}}
	{"specversion":"1.0","id":"410eb001-8b15-4dab-a7e0-61e55a721369","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube"}}
	{"specversion":"1.0","id":"f5a48b80-bf23-4431-bc8e-de6f1de65fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"56ffdc93-74e0-455c-b5d2-be73fc9ac0ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8eab0b3f-f94b-4e94-a9b7-5c757e48e02f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-814930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-814930
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (32.73s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-330342 --network=
E0314 18:16:25.389156  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:16:31.881181  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-330342 --network=: (30.665838426s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-330342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-330342
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-330342: (2.043807901s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.73s)

TestKicCustomNetwork/use_default_bridge_network (27.48s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-438340 --network=bridge
E0314 18:16:59.565484  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-438340 --network=bridge: (25.535137291s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-438340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-438340
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-438340: (1.927530583s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.48s)

TestKicExistingNetwork (26.94s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-025707 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-025707 --network=existing-network: (24.870899943s)
helpers_test.go:175: Cleaning up "existing-network-025707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-025707
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-025707: (1.932092402s)
--- PASS: TestKicExistingNetwork (26.94s)

TestKicCustomSubnet (25.59s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-054110 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-054110 --subnet=192.168.60.0/24: (23.494100875s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-054110 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-054110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-054110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-054110: (2.078009593s)
--- PASS: TestKicCustomSubnet (25.59s)

TestKicStaticIP (28.23s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-558035 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-558035 --static-ip=192.168.200.200: (26.09775265s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-558035 ip
helpers_test.go:175: Cleaning up "static-ip-558035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-558035
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-558035: (1.997203317s)
--- PASS: TestKicStaticIP (28.23s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (48.95s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-539648 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-539648 --driver=docker  --container-runtime=containerd: (21.680250855s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-542905 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-542905 --driver=docker  --container-runtime=containerd: (22.408161082s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-539648
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-542905
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-542905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-542905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-542905: (1.869902801s)
helpers_test.go:175: Cleaning up "first-539648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-539648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-539648: (1.874122291s)
--- PASS: TestMinikubeProfile (48.95s)

TestMountStart/serial/StartWithMountFirst (7.83s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-727088 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-727088 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.832544242s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.83s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-727088 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.06s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-744142 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-744142 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.063059792s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.06s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-744142 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.59s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-727088 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-727088 --alsologtostderr -v=5: (1.588730669s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-744142 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.18s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-744142
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-744142: (1.184462895s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (6.77s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-744142
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-744142: (5.768496271s)
--- PASS: TestMountStart/serial/RestartStopped (6.77s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-744142 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (66.54s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123497 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-123497 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.077525908s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.54s)

TestMultiNode/serial/DeployApp2Nodes (3.51s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-123497 -- rollout status deployment/busybox: (1.796525569s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-fzkdv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-t2bcj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-fzkdv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-t2bcj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-fzkdv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-t2bcj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.51s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-fzkdv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-fzkdv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-t2bcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-123497 -- exec busybox-5b5d89c9d6-t2bcj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

TestMultiNode/serial/AddNode (17.98s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-123497 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-123497 -v 3 --alsologtostderr: (17.357393854s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.98s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-123497 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.3s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

TestMultiNode/serial/CopyFile (9.48s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp testdata/cp-test.txt multinode-123497:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile250390641/001/cp-test_multinode-123497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497:/home/docker/cp-test.txt multinode-123497-m02:/home/docker/cp-test_multinode-123497_multinode-123497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m02 "sudo cat /home/docker/cp-test_multinode-123497_multinode-123497-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497:/home/docker/cp-test.txt multinode-123497-m03:/home/docker/cp-test_multinode-123497_multinode-123497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m03 "sudo cat /home/docker/cp-test_multinode-123497_multinode-123497-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp testdata/cp-test.txt multinode-123497-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile250390641/001/cp-test_multinode-123497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497-m02:/home/docker/cp-test.txt multinode-123497:/home/docker/cp-test_multinode-123497-m02_multinode-123497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497 "sudo cat /home/docker/cp-test_multinode-123497-m02_multinode-123497.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497-m02:/home/docker/cp-test.txt multinode-123497-m03:/home/docker/cp-test_multinode-123497-m02_multinode-123497-m03.txt
E0314 18:21:25.389607  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m03 "sudo cat /home/docker/cp-test_multinode-123497-m02_multinode-123497-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp testdata/cp-test.txt multinode-123497-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile250390641/001/cp-test_multinode-123497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497-m03:/home/docker/cp-test.txt multinode-123497:/home/docker/cp-test_multinode-123497-m03_multinode-123497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497 "sudo cat /home/docker/cp-test_multinode-123497-m03_multinode-123497.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 cp multinode-123497-m03:/home/docker/cp-test.txt multinode-123497-m02:/home/docker/cp-test_multinode-123497-m03_multinode-123497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 ssh -n multinode-123497-m02 "sudo cat /home/docker/cp-test_multinode-123497-m03_multinode-123497-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.48s)

TestMultiNode/serial/StopNode (2.14s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-123497 node stop m03: (1.18726407s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-123497 status: exit status 7 (476.692127ms)
-- stdout --
	multinode-123497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-123497-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-123497-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-123497 status --alsologtostderr: exit status 7 (479.309168ms)
-- stdout --
	multinode-123497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-123497-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-123497-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0314 18:21:30.679567  857087 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:21:30.679713  857087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:21:30.679726  857087 out.go:304] Setting ErrFile to fd 2...
	I0314 18:21:30.679733  857087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:21:30.679966  857087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 18:21:30.680157  857087 out.go:298] Setting JSON to false
	I0314 18:21:30.680191  857087 mustload.go:65] Loading cluster: multinode-123497
	I0314 18:21:30.680243  857087 notify.go:220] Checking for updates...
	I0314 18:21:30.680564  857087 config.go:182] Loaded profile config "multinode-123497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:21:30.680580  857087 status.go:255] checking status of multinode-123497 ...
	I0314 18:21:30.680962  857087 cli_runner.go:164] Run: docker container inspect multinode-123497 --format={{.State.Status}}
	I0314 18:21:30.697178  857087 status.go:330] multinode-123497 host status = "Running" (err=<nil>)
	I0314 18:21:30.697206  857087 host.go:66] Checking if "multinode-123497" exists ...
	I0314 18:21:30.697504  857087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-123497
	I0314 18:21:30.713422  857087 host.go:66] Checking if "multinode-123497" exists ...
	I0314 18:21:30.713707  857087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:21:30.713770  857087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-123497
	I0314 18:21:30.730655  857087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33652 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/multinode-123497/id_rsa Username:docker}
	I0314 18:21:30.822515  857087 ssh_runner.go:195] Run: systemctl --version
	I0314 18:21:30.826677  857087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:21:30.837160  857087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:21:30.885912  857087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-03-14 18:21:30.875909581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 18:21:30.886848  857087 kubeconfig.go:125] found "multinode-123497" server: "https://192.168.67.2:8443"
	I0314 18:21:30.886870  857087 api_server.go:166] Checking apiserver status ...
	I0314 18:21:30.886917  857087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:21:30.898615  857087 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1572/cgroup
	I0314 18:21:30.907480  857087 api_server.go:182] apiserver freezer: "2:freezer:/docker/15fff3faddcecb4aae9ccd1ed175e6aeb9c37780b6415c8353985e71ca36b8e0/kubepods/burstable/pod9b913c55b05fe9a12231e9a40a89dca2/9ee7d31735d88ff25b63115ff63de516506efa87964b936b72767fa063de6bd7"
	I0314 18:21:30.907551  857087 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15fff3faddcecb4aae9ccd1ed175e6aeb9c37780b6415c8353985e71ca36b8e0/kubepods/burstable/pod9b913c55b05fe9a12231e9a40a89dca2/9ee7d31735d88ff25b63115ff63de516506efa87964b936b72767fa063de6bd7/freezer.state
	I0314 18:21:30.915745  857087 api_server.go:204] freezer state: "THAWED"
	I0314 18:21:30.915777  857087 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0314 18:21:30.919683  857087 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0314 18:21:30.919715  857087 status.go:422] multinode-123497 apiserver status = Running (err=<nil>)
	I0314 18:21:30.919726  857087 status.go:257] multinode-123497 status: &{Name:multinode-123497 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:21:30.919744  857087 status.go:255] checking status of multinode-123497-m02 ...
	I0314 18:21:30.920052  857087 cli_runner.go:164] Run: docker container inspect multinode-123497-m02 --format={{.State.Status}}
	I0314 18:21:30.937686  857087 status.go:330] multinode-123497-m02 host status = "Running" (err=<nil>)
	I0314 18:21:30.937714  857087 host.go:66] Checking if "multinode-123497-m02" exists ...
	I0314 18:21:30.937959  857087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-123497-m02
	I0314 18:21:30.955658  857087 host.go:66] Checking if "multinode-123497-m02" exists ...
	I0314 18:21:30.955965  857087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:21:30.956006  857087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-123497-m02
	I0314 18:21:30.971646  857087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33657 SSHKeyPath:/home/jenkins/minikube-integration/18384-708595/.minikube/machines/multinode-123497-m02/id_rsa Username:docker}
	I0314 18:21:31.066632  857087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:21:31.077512  857087 status.go:257] multinode-123497-m02 status: &{Name:multinode-123497-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:21:31.077549  857087 status.go:255] checking status of multinode-123497-m03 ...
	I0314 18:21:31.077799  857087 cli_runner.go:164] Run: docker container inspect multinode-123497-m03 --format={{.State.Status}}
	I0314 18:21:31.094581  857087 status.go:330] multinode-123497-m03 host status = "Stopped" (err=<nil>)
	I0314 18:21:31.094609  857087 status.go:343] host is not running, skipping remaining checks
	I0314 18:21:31.094624  857087 status.go:257] multinode-123497-m03 status: &{Name:multinode-123497-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 node start m03 -v=7 --alsologtostderr
E0314 18:21:31.880746  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-123497 node start m03 -v=7 --alsologtostderr: (8.110997026s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.79s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (85.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-123497
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-123497
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-123497: (24.711588147s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123497 --wait=true -v=8 --alsologtostderr
E0314 18:22:48.436045  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-123497 --wait=true -v=8 --alsologtostderr: (1m0.185731666s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-123497
--- PASS: TestMultiNode/serial/RestartKeepsNodes (85.03s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-123497 node delete m03: (4.496062939s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.07s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-123497 stop: (23.560556625s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-123497 status: exit status 7 (98.931068ms)

                                                
                                                
-- stdout --
	multinode-123497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-123497-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-123497 status --alsologtostderr: exit status 7 (94.062129ms)

                                                
                                                
-- stdout --
	multinode-123497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-123497-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:23:33.705162  866172 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:23:33.705440  866172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:23:33.705450  866172 out.go:304] Setting ErrFile to fd 2...
	I0314 18:23:33.705455  866172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:23:33.705654  866172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 18:23:33.705847  866172 out.go:298] Setting JSON to false
	I0314 18:23:33.705878  866172 mustload.go:65] Loading cluster: multinode-123497
	I0314 18:23:33.705948  866172 notify.go:220] Checking for updates...
	I0314 18:23:33.706341  866172 config.go:182] Loaded profile config "multinode-123497": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:23:33.706361  866172 status.go:255] checking status of multinode-123497 ...
	I0314 18:23:33.706803  866172 cli_runner.go:164] Run: docker container inspect multinode-123497 --format={{.State.Status}}
	I0314 18:23:33.723893  866172 status.go:330] multinode-123497 host status = "Stopped" (err=<nil>)
	I0314 18:23:33.723913  866172 status.go:343] host is not running, skipping remaining checks
	I0314 18:23:33.723920  866172 status.go:257] multinode-123497 status: &{Name:multinode-123497 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:23:33.723966  866172 status.go:255] checking status of multinode-123497-m02 ...
	I0314 18:23:33.724221  866172 cli_runner.go:164] Run: docker container inspect multinode-123497-m02 --format={{.State.Status}}
	I0314 18:23:33.739569  866172 status.go:330] multinode-123497-m02 host status = "Stopped" (err=<nil>)
	I0314 18:23:33.739619  866172 status.go:343] host is not running, skipping remaining checks
	I0314 18:23:33.739633  866172 status.go:257] multinode-123497-m02 status: &{Name:multinode-123497-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.75s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123497 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-123497 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.125959133s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-123497 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.71s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-123497
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123497-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-123497-m02 --driver=docker  --container-runtime=containerd: exit status 14 (79.644416ms)

                                                
                                                
-- stdout --
	* [multinode-123497-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-123497-m02' is duplicated with machine name 'multinode-123497-m02' in profile 'multinode-123497'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-123497-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-123497-m03 --driver=docker  --container-runtime=containerd: (24.498099099s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-123497
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-123497: exit status 80 (279.894722ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-123497 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-123497-m03 already exists in multinode-123497-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-123497-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-123497-m03: (1.885837458s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.80s)

                                                
                                    
TestPreload (108.36s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-538860 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-538860 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m7.063289519s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-538860 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-538860
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-538860: (11.895356633s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-538860 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0314 18:26:25.389795  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:26:31.880114  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-538860 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (26.2390794s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-538860 image list
helpers_test.go:175: Cleaning up "test-preload-538860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-538860
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-538860: (2.265811538s)
--- PASS: TestPreload (108.36s)

                                                
                                    
TestScheduledStopUnix (99.85s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-949763 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-949763 --memory=2048 --driver=docker  --container-runtime=containerd: (24.099069282s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-949763 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-949763 -n scheduled-stop-949763
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-949763 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-949763 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-949763 -n scheduled-stop-949763
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-949763
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-949763 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0314 18:27:54.926006  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-949763
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-949763: exit status 7 (80.623784ms)

                                                
                                                
-- stdout --
	scheduled-stop-949763
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-949763 -n scheduled-stop-949763
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-949763 -n scheduled-stop-949763: exit status 7 (77.541502ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-949763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-949763
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-949763: (4.272235186s)
--- PASS: TestScheduledStopUnix (99.85s)

                                                
                                    
TestInsufficientStorage (13.14s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-466247 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-466247 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.739419706s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ac2f6f00-808b-4e3a-9ebc-1775c3f43e4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-466247] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aca864b2-2fae-414e-be06-3af47f89fa47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"31d9ff58-917e-4256-aa47-b225e0c93393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3ffcc888-8d4a-4f2d-9589-14d371957b2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig"}}
	{"specversion":"1.0","id":"c4945855-5d56-4e76-8ce6-8d9066bd5af0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube"}}
	{"specversion":"1.0","id":"4ed08250-ab02-407f-8bcb-764dfa4ec85c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c26d909c-0da2-4c49-a7fa-6c6e1541780d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c97ceb8e-1fb5-4e45-8cb2-efbeb4d88628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"deef1b34-6644-48c1-ad5b-6edfeabae2e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f5e88ddf-e2e4-4858-9060-0d5d05241efe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4337331-1d78-432e-916e-c0045e5cbef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a177c899-579e-4077-a21c-b50daaa13be9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-466247\" primary control-plane node in \"insufficient-storage-466247\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fc1110d-052f-487a-ba5f-d9ee2dacad62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1710284843-18375 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"85669f84-ed78-4f8f-872f-4ff4be6be2f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5d49734-cd39-4fa5-97cc-b61aa7a83673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-466247 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-466247 --output=json --layout=cluster: exit status 7 (280.326638ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-466247","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-466247","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0314 18:28:35.244858  887476 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-466247" does not appear in /home/jenkins/minikube-integration/18384-708595/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-466247 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-466247 --output=json --layout=cluster: exit status 7 (272.662283ms)

-- stdout --
	{"Name":"insufficient-storage-466247","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-466247","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0314 18:28:35.518226  887562 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-466247" does not appear in /home/jenkins/minikube-integration/18384-708595/kubeconfig
	E0314 18:28:35.527988  887562 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/insufficient-storage-466247/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-466247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-466247
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-466247: (1.842467505s)
--- PASS: TestInsufficientStorage (13.14s)

TestRunningBinaryUpgrade (57.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2598905177 start -p running-upgrade-221630 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2598905177 start -p running-upgrade-221630 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (28.378302015s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-221630 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-221630 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.483350873s)
helpers_test.go:175: Cleaning up "running-upgrade-221630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-221630
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-221630: (2.652431722s)
--- PASS: TestRunningBinaryUpgrade (57.24s)

TestKubernetesUpgrade (402.91s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-750763 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-750763 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.554037461s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-750763
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-750763: (1.728127842s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-750763 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-750763 status --format={{.Host}}: exit status 7 (89.960746ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-750763 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-750763 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m43.633194269s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-750763 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-750763 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-750763 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (89.208231ms)

-- stdout --
	* [kubernetes-upgrade-750763] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-750763
	    minikube start -p kubernetes-upgrade-750763 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7507632 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-750763 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-750763 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0314 18:36:25.389433  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-750763 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4.862052638s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-750763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-750763
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-750763: (2.890902234s)
--- PASS: TestKubernetesUpgrade (402.91s)

TestMissingContainerUpgrade (143.22s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3942149548 start -p missing-upgrade-904999 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3942149548 start -p missing-upgrade-904999 --memory=2200 --driver=docker  --container-runtime=containerd: (53.74555463s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-904999
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-904999: (13.614434218s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-904999
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-904999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-904999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m13.404887467s)
helpers_test.go:175: Cleaning up "missing-upgrade-904999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-904999
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-904999: (2.006131982s)
--- PASS: TestMissingContainerUpgrade (143.22s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-612990 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-612990 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (102.869965ms)

-- stdout --
	* [NoKubernetes-612990] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (33.25s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-612990 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-612990 --driver=docker  --container-runtime=containerd: (32.923995801s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-612990 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.25s)

TestNetworkPlugins/group/false (8.52s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-393587 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-393587 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (222.436525ms)

-- stdout --
	* [false-393587] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0314 18:28:42.042043  889655 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:28:42.042186  889655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:28:42.042198  889655 out.go:304] Setting ErrFile to fd 2...
	I0314 18:28:42.042205  889655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:28:42.042518  889655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-708595/.minikube/bin
	I0314 18:28:42.043354  889655 out.go:298] Setting JSON to false
	I0314 18:28:42.044961  889655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11473,"bootTime":1710429449,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:28:42.045091  889655 start.go:139] virtualization: kvm guest
	I0314 18:28:42.049091  889655 out.go:177] * [false-393587] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:28:42.051568  889655 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:28:42.051585  889655 notify.go:220] Checking for updates...
	I0314 18:28:42.053356  889655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:28:42.055179  889655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-708595/kubeconfig
	I0314 18:28:42.056826  889655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-708595/.minikube
	I0314 18:28:42.058666  889655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:28:42.060293  889655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:28:42.062463  889655 config.go:182] Loaded profile config "NoKubernetes-612990": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:28:42.062629  889655 config.go:182] Loaded profile config "force-systemd-env-649849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:28:42.062771  889655 config.go:182] Loaded profile config "offline-containerd-596017": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 18:28:42.062900  889655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:28:42.094041  889655 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:28:42.094210  889655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:28:42.159757  889655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:86 SystemTime:2024-03-14 18:28:42.149304656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0314 18:28:42.159947  889655 docker.go:295] overlay module found
	I0314 18:28:42.166421  889655 out.go:177] * Using the docker driver based on user configuration
	I0314 18:28:42.168873  889655 start.go:297] selected driver: docker
	I0314 18:28:42.168898  889655 start.go:901] validating driver "docker" against <nil>
	I0314 18:28:42.168917  889655 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:28:42.172356  889655 out.go:177] 
	W0314 18:28:42.174670  889655 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0314 18:28:42.177177  889655 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-393587 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-393587

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-393587

>>> host: /etc/nsswitch.conf:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: /etc/hosts:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: /etc/resolv.conf:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-393587

>>> host: crictl pods:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: crictl containers:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> k8s: describe netcat deployment:
error: context "false-393587" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-393587" does not exist

>>> k8s: netcat logs:
error: context "false-393587" does not exist

>>> k8s: describe coredns deployment:
error: context "false-393587" does not exist

>>> k8s: describe coredns pods:
error: context "false-393587" does not exist

>>> k8s: coredns logs:
error: context "false-393587" does not exist

>>> k8s: describe api server pod(s):
error: context "false-393587" does not exist

>>> k8s: api server logs:
error: context "false-393587" does not exist

>>> host: /etc/cni:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: ip a s:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: ip r s:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: iptables-save:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: iptables table nat:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> k8s: describe kube-proxy daemon set:
error: context "false-393587" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-393587" does not exist

>>> k8s: kube-proxy logs:
error: context "false-393587" does not exist

>>> host: kubelet daemon status:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: kubelet daemon config:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> k8s: kubelet logs:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-393587

>>> host: docker daemon status:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

>>> host: docker daemon config:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393587"

                                                
                                                
----------------------- debugLogs end: false-393587 [took: 8.014121212s] --------------------------------
helpers_test.go:175: Cleaning up "false-393587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-393587
--- PASS: TestNetworkPlugins/group/false (8.52s)

TestNoKubernetes/serial/StartWithStopK8s (15.92s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-612990 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-612990 --no-kubernetes --driver=docker  --container-runtime=containerd: (13.646378636s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-612990 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-612990 status -o json: exit status 2 (296.962219ms)

-- stdout --
	{"Name":"NoKubernetes-612990","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-612990
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-612990: (1.977685939s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.92s)

TestNoKubernetes/serial/Start (6.96s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-612990 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-612990 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.955871018s)
--- PASS: TestNoKubernetes/serial/Start (6.96s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-612990 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-612990 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.59275ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (5.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.963033024s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.69s)

TestStoppedBinaryUpgrade/Setup (0.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-612990
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-612990: (1.287458212s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestStoppedBinaryUpgrade/Upgrade (136.77s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2959657728 start -p stopped-upgrade-457538 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2959657728 start -p stopped-upgrade-457538 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m12.064607024s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2959657728 -p stopped-upgrade-457538 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2959657728 -p stopped-upgrade-457538 stop: (22.976815825s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-457538 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0314 18:31:25.389377  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:31:31.880099  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-457538 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.726092772s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (136.77s)

TestNoKubernetes/serial/StartNoArgs (6.1s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-612990 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-612990 --driver=docker  --container-runtime=containerd: (6.100209679s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-612990 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-612990 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.340317ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-457538
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

TestPause/serial/Start (52.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-250884 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-250884 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (52.957550105s)
--- PASS: TestPause/serial/Start (52.96s)

TestNetworkPlugins/group/auto/Start (56.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (56.126716321s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.13s)

TestNetworkPlugins/group/kindnet/Start (54.42s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (54.418090528s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.42s)

TestPause/serial/SecondStartNoReconfiguration (5.35s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-250884 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-250884 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.340315858s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.35s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-250884 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-250884 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-250884 --output=json --layout=cluster: exit status 2 (343.871442ms)

-- stdout --
	{"Name":"pause-250884","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-250884","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-250884 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-250884 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.76s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-250884 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-250884 --alsologtostderr -v=5: (2.764071881s)
--- PASS: TestPause/serial/DeletePaused (2.76s)

TestPause/serial/VerifyDeletedResources (0.8s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-250884
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-250884: exit status 1 (17.475288ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-250884: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.80s)

TestNetworkPlugins/group/calico/Start (69.92s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m9.91560354s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.92s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-393587 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-393587 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dmjrn" [d281602e-17a0-4508-b5b1-978dc7fff1a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dmjrn" [d281602e-17a0-4508-b5b1-978dc7fff1a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003962049s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-393587 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (56.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (56.755193927s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.76s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-99gz8" [baf462a1-9c18-4a42-adc4-dc118932ef3a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006451275s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-393587 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-393587 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v9x6l" [4e41d349-bede-497f-9a76-6ccdca0fb28a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v9x6l" [4e41d349-bede-497f-9a76-6ccdca0fb28a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005023518s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-393587 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m18.438996233s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.44s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-btt22" [944a92b2-4bfa-4dd8-a1ce-ad241cd4456f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005399624s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-393587 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-393587 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wtdcg" [e900f5af-3049-4bf8-9272-3b9375827749] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wtdcg" [e900f5af-3049-4bf8-9272-3b9375827749] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003164114s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-393587 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-393587 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-393587 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5kq9v" [2e9c13ce-51e2-4ba7-9db5-4e1e7719bf14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5kq9v" [2e9c13ce-51e2-4ba7-9db5-4e1e7719bf14] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004895263s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-393587 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (79.09s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m19.087002054s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.64s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-393587 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.643299244s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.64s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-393587 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-393587 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bft48" [d54a11e5-6929-4dd7-8604-a9559b248325] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bft48" [d54a11e5-6929-4dd7-8604-a9559b248325] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004400549s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-393587 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ckczf" [85a01913-b605-4451-a705-b3a6c0d21f23] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00500908s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (147.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-641261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-641261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m27.440105433s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-393587 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.98s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-393587 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fnmqj" [e0e690d4-fda2-4a48-ad54-95e561dce55a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fnmqj" [e0e690d4-fda2-4a48-ad54-95e561dce55a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004971482s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.97s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-831089 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0314 18:36:31.880372  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-831089 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m9.966929995s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-393587 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-393587 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tplmh" [5936ce51-5ab3-48ac-b2cc-d350a7838893] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tplmh" [5936ce51-5ab3-48ac-b2cc-d350a7838893] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004299289s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-393587 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-393587 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-393587 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0314 18:42:17.154452  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:42:18.138952  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:42:25.312789  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (52.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-372479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-372479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (52.635788667s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.00s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-240724 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-240724 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (54.001186752s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-831089 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [15d24cf4-168c-41b2-b1db-1740ae60f32c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [15d24cf4-168c-41b2-b1db-1740ae60f32c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00386157s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-831089 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-831089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-831089 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.91s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-831089 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-831089 --alsologtostderr -v=3: (11.90534604s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-372479 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ab6fc66c-d4b2-4d16-8c3d-7d999a4a2a5a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ab6fc66c-d4b2-4d16-8c3d-7d999a4a2a5a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003860781s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-372479 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831089 -n no-preload-831089
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831089 -n no-preload-831089: exit status 7 (82.531969ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-831089 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-240724 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a60237c-c138-44bb-85f3-21759796734f] Pending
helpers_test.go:344: "busybox" [5a60237c-c138-44bb-85f3-21759796734f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5a60237c-c138-44bb-85f3-21759796734f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005322745s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-240724 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.83s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-831089 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-831089 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m22.512124009s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-831089 -n no-preload-831089
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-372479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-372479 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-372479 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-372479 --alsologtostderr -v=3: (11.967137097s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-240724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-240724 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-240724 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-240724 --alsologtostderr -v=3: (12.035622241s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-372479 -n embed-certs-372479
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-372479 -n embed-certs-372479: exit status 7 (82.160111ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-372479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (262.93s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-372479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-372479 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m22.573187222s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-372479 -n embed-certs-372479
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.93s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724: exit status 7 (93.200736ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-240724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-240724 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0314 18:38:32.314564  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:32.319686  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:32.329996  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:32.350301  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:32.390623  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:32.471249  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:32.631827  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:32.953617  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:33.594163  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:34.875266  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:37.435869  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:38:42.556890  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-240724 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m23.491818811s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.80s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-641261 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd7cde3c-ea38-44fa-abc5-a1396e5c2c78] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0314 18:38:52.797690  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cd7cde3c-ea38-44fa-abc5-a1396e5c2c78] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003152733s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-641261 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-641261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-641261 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/old-k8s-version/serial/Stop (11.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-641261 --alsologtostderr -v=3
E0314 18:39:00.723086  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:00.728394  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:00.738571  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:00.758856  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:00.799174  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:00.879532  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:01.039978  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:01.361119  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:02.001966  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:03.282268  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:05.842477  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:10.963187  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-641261 --alsologtostderr -v=3: (11.876083327s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.88s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-641261 -n old-k8s-version-641261
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-641261 -n old-k8s-version-641261: exit status 7 (85.50267ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-641261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (62.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-641261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0314 18:39:13.277983  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:39:21.203933  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:28.436629  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:39:41.470626  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:41.475879  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:41.486107  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:41.506409  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:41.546612  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:41.627036  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:41.684465  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:39:41.787746  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:42.108362  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:42.748602  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:44.029164  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:46.589976  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:51.710734  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:39:54.239016  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/auto-393587/client.crt: no such file or directory
E0314 18:39:57.899805  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:39:57.905098  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:39:57.915432  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:39:57.936323  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:39:57.976665  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:39:58.056994  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:39:58.217625  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:39:58.538192  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:39:59.179189  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:40:00.460203  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:40:01.951310  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:40:03.020909  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:40:08.141903  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-641261 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (1m1.800996315s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-641261 -n old-k8s-version-641261
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (62.12s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (53.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0314 18:40:18.382820  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xp8rk" [a6a78cdd-f8a3-46c7-a93c-3f64deeb8558] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0314 18:40:22.431971  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:40:22.645322  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:40:38.864005  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:40:55.231987  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:55.237305  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:55.247592  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:55.268044  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:55.308311  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:55.388811  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:55.549180  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:55.869746  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:56.510242  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:40:57.790607  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:41:00.351625  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xp8rk" [a6a78cdd-f8a3-46c7-a93c-3f64deeb8558] Running
E0314 18:41:03.392376  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/calico-393587/client.crt: no such file or directory
E0314 18:41:05.472362  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 53.003952977s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (53.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xp8rk" [a6a78cdd-f8a3-46c7-a93c-3f64deeb8558] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004060758s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-641261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-641261 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-641261 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-641261 -n old-k8s-version-641261
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-641261 -n old-k8s-version-641261: exit status 2 (311.087926ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-641261 -n old-k8s-version-641261
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-641261 -n old-k8s-version-641261: exit status 2 (301.664715ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-641261 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-641261 -n old-k8s-version-641261
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-641261 -n old-k8s-version-641261
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/FirstStart (37.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-886779 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0314 18:41:19.824336  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/custom-flannel-393587/client.crt: no such file or directory
E0314 18:41:21.363393  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:21.368668  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:21.378916  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:21.399099  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:21.439398  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:21.519832  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:21.680266  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:22.000598  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:22.641595  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:23.922088  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:25.388820  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/addons-130663/client.crt: no such file or directory
E0314 18:41:26.483287  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:31.603719  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:31.880760  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/functional-952553/client.crt: no such file or directory
E0314 18:41:36.193696  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/enable-default-cni-393587/client.crt: no such file or directory
E0314 18:41:37.174976  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:37.180241  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:37.190485  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:37.210774  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:37.251149  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:37.331513  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:37.492564  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:37.813159  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:38.454268  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:39.735144  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:41.844911  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
E0314 18:41:42.295782  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
E0314 18:41:44.565697  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/kindnet-393587/client.crt: no such file or directory
E0314 18:41:47.416896  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-886779 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (37.12356433s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-886779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-886779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.125717528s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-886779 --alsologtostderr -v=3
E0314 18:41:57.658076  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/bridge-393587/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-886779 --alsologtostderr -v=3: (1.213649916s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-886779 -n newest-cni-886779
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-886779 -n newest-cni-886779: exit status 7 (83.623576ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-886779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (13.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-886779 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0314 18:42:02.325696  715468 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-708595/.minikube/profiles/flannel-393587/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-886779 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (13.061793742s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-886779 -n newest-cni-886779
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.39s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-886779 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-886779 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-886779 -n newest-cni-886779
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-886779 -n newest-cni-886779: exit status 2 (299.26329ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-886779 -n newest-cni-886779
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-886779 -n newest-cni-886779: exit status 2 (314.357213ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-886779 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-886779 -n newest-cni-886779
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-886779 -n newest-cni-886779
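The pause/unpause sequence above deliberately tolerates non-zero exits from `minikube status`: exit 2 when a component reports Paused/Stopped, exit 7 when the host itself is stopped, each logged as "(may be ok)". A minimal sketch of that convention, using a hypothetical `check_status` helper (not code from the test suite) with a simulated exit code:

```shell
# Hypothetical helper mirroring the exit-code handling seen in this log:
#   0      = component running
#   2 or 7 = stopped/paused, expected right after a pause or stop
#   other  = genuine failure
check_status() {
  local code=$1   # simulates the exit code returned by `minikube status`
  case "$code" in
    0)   echo "running" ;;
    2|7) echo "status error: exit status $code (may be ok)" ;;
    *)   echo "unexpected failure: exit status $code" ;;
  esac
}

check_status 2
check_status 7
```

This is why the test keeps going after each "Non-zero exit" line: those statuses are the assertion, not an error.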
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-67z5m" [c9e299b4-d101-4efd-aaa3-4573120cf741] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004646392s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-67z5m" [c9e299b4-d101-4efd-aaa3-4573120cf741] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00380742s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-831089 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-831089 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.8s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-831089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831089 -n no-preload-831089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831089 -n no-preload-831089: exit status 2 (306.961965ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831089 -n no-preload-831089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831089 -n no-preload-831089: exit status 2 (302.186996ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-831089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-831089 -n no-preload-831089
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-831089 -n no-preload-831089
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x268b" [b3300ffb-cb6f-4d54-bfb5-af93960448f0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00327144s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x268b" [b3300ffb-cb6f-4d54-bfb5-af93960448f0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00402511s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-372479 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-h769r" [7b3f7248-1352-4195-a075-0fccb2d57888] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004323538s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-372479 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-372479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-372479 -n embed-certs-372479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-372479 -n embed-certs-372479: exit status 2 (314.405399ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-372479 -n embed-certs-372479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-372479 -n embed-certs-372479: exit status 2 (304.63071ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-372479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-372479 -n embed-certs-372479
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-372479 -n embed-certs-372479
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.75s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-h769r" [7b3f7248-1352-4195-a075-0fccb2d57888] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003942014s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-240724 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-240724 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-240724 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724: exit status 2 (296.249177ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724: exit status 2 (299.190564ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-240724 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-240724 -n default-k8s-diff-port-240724
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

Test skip (26/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.58s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-393587 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-393587

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-393587

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /etc/hosts:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /etc/resolv.conf:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-393587

>>> host: crictl pods:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: crictl containers:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> k8s: describe netcat deployment:
error: context "kubenet-393587" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-393587" does not exist

>>> k8s: netcat logs:
error: context "kubenet-393587" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-393587" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-393587" does not exist

>>> k8s: coredns logs:
error: context "kubenet-393587" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-393587" does not exist

>>> k8s: api server logs:
error: context "kubenet-393587" does not exist

>>> host: /etc/cni:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: ip a s:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: ip r s:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: iptables-save:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: iptables table nat:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-393587" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-393587" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-393587" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: kubelet daemon config:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> k8s: kubelet logs:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-393587

>>> host: docker daemon status:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: docker daemon config:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: docker system info:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: cri-docker daemon status:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: cri-docker daemon config:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: cri-dockerd version:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: containerd daemon status:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: containerd daemon config:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: containerd config dump:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: crio daemon status:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: crio daemon config:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: /etc/crio:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

>>> host: crio config:
* Profile "kubenet-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393587"

----------------------- debugLogs end: kubenet-393587 [took: 4.380086868s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-393587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-393587
--- SKIP: TestNetworkPlugins/group/kubenet (4.58s)

TestNetworkPlugins/group/cilium (4.44s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-393587 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-393587

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-393587

>>> host: crictl pods:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: crictl containers:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> k8s: describe netcat deployment:
error: context "cilium-393587" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-393587" does not exist

>>> k8s: netcat logs:
error: context "cilium-393587" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-393587" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-393587" does not exist

>>> k8s: coredns logs:
error: context "cilium-393587" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-393587" does not exist

>>> k8s: api server logs:
error: context "cilium-393587" does not exist

>>> host: /etc/cni:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: ip a s:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: ip r s:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: iptables-save:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: iptables table nat:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-393587

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-393587

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-393587" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-393587" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-393587

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-393587

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-393587" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-393587" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-393587" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-393587" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-393587" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: kubelet daemon config:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> k8s: kubelet logs:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-393587

>>> host: docker daemon status:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: docker daemon config:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: docker system info:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: cri-docker daemon status:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: cri-docker daemon config:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: cri-dockerd version:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: containerd daemon status:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: containerd daemon config:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: containerd config dump:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: crio daemon status:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: crio daemon config:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: /etc/crio:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

>>> host: crio config:
* Profile "cilium-393587" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393587"

----------------------- debugLogs end: cilium-393587 [took: 4.267271941s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-393587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-393587
--- SKIP: TestNetworkPlugins/group/cilium (4.44s)

TestStartStop/group/disable-driver-mounts (0.28s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-664755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-664755
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)