Test Report: Docker_Linux_containerd 18641

dade0c7d2b11ae45e48475c54c974928e476847b:2024-04-15:34033

Test fail (1/335)

Order  Failed test                     Duration
45     TestAddons/parallel/Headlamp    2.32s
TestAddons/parallel/Headlamp (2.32s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-798865 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-798865 --alsologtostderr -v=1: exit status 11 (302.223594ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0415 10:23:03.544178   23672 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:23:03.546651   23672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:23:03.546876   23672 out.go:304] Setting ErrFile to fd 2...
	I0415 10:23:03.546893   23672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:23:03.547262   23672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:23:03.547656   23672 mustload.go:65] Loading cluster: addons-798865
	I0415 10:23:03.548144   23672 config.go:182] Loaded profile config "addons-798865": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:23:03.548172   23672 addons.go:597] checking whether the cluster is paused
	I0415 10:23:03.548299   23672 config.go:182] Loaded profile config "addons-798865": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:23:03.548315   23672 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:23:03.548911   23672 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:23:03.567553   23672 ssh_runner.go:195] Run: systemctl --version
	I0415 10:23:03.567601   23672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:23:03.585962   23672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:23:03.696669   23672 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0415 10:23:03.696737   23672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0415 10:23:03.738072   23672 cri.go:89] found id: "0df7ab2fe4c965eb2d1c309f8375b50a11adcb39a375f3b221ae29804b1cf282"
	I0415 10:23:03.738104   23672 cri.go:89] found id: "69b852098091de9e08c3a9b9f190fc2105ed92f64257301cf06aa042b8294ce6"
	I0415 10:23:03.738110   23672 cri.go:89] found id: "94cec1f0048ee87ede520f781fb8b7b536fefdb5c50686692b692df9296cb6fe"
	I0415 10:23:03.738115   23672 cri.go:89] found id: "ecff3442f9d6ae376952fe2f3ab2bb37daf92efceedf0dc7099c03903338998a"
	I0415 10:23:03.738120   23672 cri.go:89] found id: "c1b872a415e48603e91c5a8e9151e69c5d923cbc6e923691107cf8b4780c7af7"
	I0415 10:23:03.738126   23672 cri.go:89] found id: "72551be4b4ba10215f7e487db2d14bfcaa98dbf4f3c2e88bf994948ecb5e5a3d"
	I0415 10:23:03.738130   23672 cri.go:89] found id: "2bba070343e09c2da363bdcc438460b483473c2d5af856a5b8485fdbc46867ff"
	I0415 10:23:03.738134   23672 cri.go:89] found id: "e99c5302bfda3823ce8e2157bc03c9ba9961b56300d6583fd6aa156bceb3673d"
	I0415 10:23:03.738139   23672 cri.go:89] found id: "a300ad6c587d25242b303c833915a3dcb391176ad5ca1ba68e9bbec05467a34d"
	I0415 10:23:03.738146   23672 cri.go:89] found id: "80175889af22ae500386888109fe0c356bd24a0c0e38950ac3d5092dc13d3c4a"
	I0415 10:23:03.738156   23672 cri.go:89] found id: "5c7a1c518aeafee789103b3b9faf08f523610a81e863479a04d4f6a9509e9d09"
	I0415 10:23:03.738160   23672 cri.go:89] found id: "1a31752ad2ab55c1af7db660c58f314d07975e40c513c4b734185c4fd618de03"
	I0415 10:23:03.738164   23672 cri.go:89] found id: "4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c"
	I0415 10:23:03.738175   23672 cri.go:89] found id: "a6da399c77c0886e739c5881f9288733b31621972320137cf45f00cc203ae1fa"
	I0415 10:23:03.738181   23672 cri.go:89] found id: "14c338e14be22dacc3a023aa1caf4c43275dca35fbef5f8c423cc13fac77851c"
	I0415 10:23:03.738185   23672 cri.go:89] found id: "f36612796bfc1e876de35a9ac0d7a41096fb35cb74981c0d2ed6896ef9ef5bff"
	I0415 10:23:03.738189   23672 cri.go:89] found id: "c7f3d5d25cf6fde4fb28ec9825bc46ce6d24fd026649aebf098243892db8703e"
	I0415 10:23:03.738194   23672 cri.go:89] found id: "d78f2ce36a4295c8b97e3e9cbeef1c22d6bec44c3cd4677fd5010c67fb0f6962"
	I0415 10:23:03.738198   23672 cri.go:89] found id: "6b85b9a831da935984b506c74932042d811dafa6634cc64c49fe958e4be7fbfa"
	I0415 10:23:03.738202   23672 cri.go:89] found id: "a35d887e071894d5a55d7551c764d30af018495ec0200e97a0d4b7590bf2b220"
	I0415 10:23:03.738208   23672 cri.go:89] found id: "9a17b8bce389bee865cf358885b94a32efe295295c6ebe1a1a1dcd937c15661d"
	I0415 10:23:03.738212   23672 cri.go:89] found id: "42350cad4065ff498a157177ab38190396a5b791be55f5df3712c88d416faac6"
	I0415 10:23:03.738215   23672 cri.go:89] found id: ""
	I0415 10:23:03.738263   23672 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0415 10:23:03.787987   23672 out.go:177] 
	W0415 10:23:03.789236   23672 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-15T10:23:03Z" level=error msg="stat /run/containerd/runc/k8s.io/e32bf97748a9217e4ffc0acb10e7dd61027f37702ab6e5c0fae17bcb42cbde62: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-15T10:23:03Z" level=error msg="stat /run/containerd/runc/k8s.io/e32bf97748a9217e4ffc0acb10e7dd61027f37702ab6e5c0fae17bcb42cbde62: no such file or directory"
	
	W0415 10:23:03.789260   23672 out.go:239] * 
	* 
	W0415 10:23:03.790969   23672 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 10:23:03.792262   23672 out.go:177] 

** /stderr **
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-798865 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-798865
helpers_test.go:235: (dbg) docker inspect addons-798865:

-- stdout --
	[
	    {
	        "Id": "f7b1a711a08a8d942dc4c204998df2af27ae2f2f4d7aef426191fae5cc79255c",
	        "Created": "2024-04-15T10:21:04.647432334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 12551,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-15T10:21:04.959395525Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8e3065bd048af0808d8ea937179eac2f6aaaa6840181cae82f858bfe4571416c",
	        "ResolvConfPath": "/var/lib/docker/containers/f7b1a711a08a8d942dc4c204998df2af27ae2f2f4d7aef426191fae5cc79255c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7b1a711a08a8d942dc4c204998df2af27ae2f2f4d7aef426191fae5cc79255c/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7b1a711a08a8d942dc4c204998df2af27ae2f2f4d7aef426191fae5cc79255c/hosts",
	        "LogPath": "/var/lib/docker/containers/f7b1a711a08a8d942dc4c204998df2af27ae2f2f4d7aef426191fae5cc79255c/f7b1a711a08a8d942dc4c204998df2af27ae2f2f4d7aef426191fae5cc79255c-json.log",
	        "Name": "/addons-798865",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-798865:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-798865",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fdc6214ebc55ce84f59152ad1a80246b2e0bc0f6bf11e51e9d367f6848858102-init/diff:/var/lib/docker/overlay2/d4f05ab23cedc634adcfa4f34dc88dc2c5fd716310ed0671b06d1e75b9361d9a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdc6214ebc55ce84f59152ad1a80246b2e0bc0f6bf11e51e9d367f6848858102/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdc6214ebc55ce84f59152ad1a80246b2e0bc0f6bf11e51e9d367f6848858102/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdc6214ebc55ce84f59152ad1a80246b2e0bc0f6bf11e51e9d367f6848858102/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-798865",
	                "Source": "/var/lib/docker/volumes/addons-798865/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-798865",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-798865",
	                "name.minikube.sigs.k8s.io": "addons-798865",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "582bbfc867b2b26f9c85f99d3b5bb76b43768f6ef2285fb9ef089f8092551d98",
	            "SandboxKey": "/var/run/docker/netns/582bbfc867b2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-798865": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "266398564bcb57608840efa5868b84b27d7abe7949ae1974f7c85c8f98833119",
	                    "EndpointID": "062416c6d2796b617a01f67ab91584727c0327eb314e5e666fc6ff7db92c92df",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-798865",
	                        "f7b1a711a08a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-798865 -n addons-798865
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-798865 logs -n 25: (1.204156163s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-548433                                                                     | download-only-548433   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| start   | -o=json --download-only                                                                     | download-only-119984   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | -p download-only-119984                                                                     |                        |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                                                |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-119984                                                                     | download-only-119984   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| start   | -o=json --download-only                                                                     | download-only-695766   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | -p download-only-695766                                                                     |                        |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                                                           |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-695766                                                                     | download-only-695766   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-548433                                                                     | download-only-548433   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-119984                                                                     | download-only-119984   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-695766                                                                     | download-only-695766   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| start   | --download-only -p                                                                          | download-docker-519343 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | download-docker-519343                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	| delete  | -p download-docker-519343                                                                   | download-docker-519343 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-477246   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | binary-mirror-477246                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:36477                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-477246                                                                     | binary-mirror-477246   | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| addons  | enable dashboard -p                                                                         | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | addons-798865                                                                               |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | addons-798865                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-798865 --wait=true                                                                | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:22 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |                |                     |                     |
	| addons  | addons-798865 addons                                                                        | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:22 UTC | 15 Apr 24 10:22 UTC |
	|         | disable metrics-server                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:23 UTC | 15 Apr 24 10:23 UTC |
	|         | addons-798865                                                                               |                        |         |                |                     |                     |
	| ssh     | addons-798865 ssh cat                                                                       | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:23 UTC | 15 Apr 24 10:23 UTC |
	|         | /opt/local-path-provisioner/pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | addons-798865 addons disable                                                                | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:23 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| ip      | addons-798865 ip                                                                            | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:23 UTC | 15 Apr 24 10:23 UTC |
	| addons  | addons-798865 addons disable                                                                | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:23 UTC | 15 Apr 24 10:23 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-798865          | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:23 UTC |                     |
	|         | -p addons-798865                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 10:20:41
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 10:20:41.321256   11879 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:20:41.321506   11879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:20:41.321518   11879 out.go:304] Setting ErrFile to fd 2...
	I0415 10:20:41.321522   11879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:20:41.321685   11879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:20:41.322294   11879 out.go:298] Setting JSON to false
	I0415 10:20:41.323264   11879 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":192,"bootTime":1713176249,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 10:20:41.323323   11879 start.go:139] virtualization: kvm guest
	I0415 10:20:41.325521   11879 out.go:177] * [addons-798865] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 10:20:41.327492   11879 out.go:177]   - MINIKUBE_LOCATION=18641
	I0415 10:20:41.329063   11879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:20:41.327485   11879 notify.go:220] Checking for updates...
	I0415 10:20:41.331731   11879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	I0415 10:20:41.333254   11879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	I0415 10:20:41.334751   11879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 10:20:41.336143   11879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 10:20:41.337635   11879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:20:41.356943   11879 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 10:20:41.357055   11879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:20:41.398890   11879 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-15 10:20:41.39008012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:20:41.398992   11879 docker.go:295] overlay module found
	I0415 10:20:41.401977   11879 out.go:177] * Using the docker driver based on user configuration
	I0415 10:20:41.403509   11879 start.go:297] selected driver: docker
	I0415 10:20:41.403522   11879 start.go:901] validating driver "docker" against <nil>
	I0415 10:20:41.403535   11879 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 10:20:41.404276   11879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:20:41.447208   11879 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-15 10:20:41.439429884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:20:41.447361   11879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 10:20:41.447593   11879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 10:20:41.449546   11879 out.go:177] * Using Docker driver with root privileges
	I0415 10:20:41.451464   11879 cni.go:84] Creating CNI manager for ""
	I0415 10:20:41.451485   11879 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0415 10:20:41.451495   11879 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 10:20:41.451560   11879 start.go:340] cluster config:
	{Name:addons-798865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-798865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:20:41.453140   11879 out.go:177] * Starting "addons-798865" primary control-plane node in "addons-798865" cluster
	I0415 10:20:41.454563   11879 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0415 10:20:41.456114   11879 out.go:177] * Pulling base image v0.0.43-1712854342-18621 ...
	I0415 10:20:41.457455   11879 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 10:20:41.457488   11879 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 10:20:41.457502   11879 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0415 10:20:41.457637   11879 cache.go:56] Caching tarball of preloaded images
	I0415 10:20:41.457732   11879 preload.go:173] Found /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 10:20:41.457744   11879 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0415 10:20:41.458113   11879 profile.go:143] Saving config to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/config.json ...
	I0415 10:20:41.458137   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/config.json: {Name:mkbda19c7bbf230f6ec70d18f6e55ab2a54dffa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:20:41.472680   11879 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 10:20:41.472799   11879 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory
	I0415 10:20:41.472820   11879 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory, skipping pull
	I0415 10:20:41.472830   11879 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in cache, skipping pull
	I0415 10:20:41.472840   11879 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f as a tarball
	I0415 10:20:41.472851   11879 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f from local cache
	I0415 10:20:53.449723   11879 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f from cached tarball
	I0415 10:20:53.449774   11879 cache.go:194] Successfully downloaded all kic artifacts
	I0415 10:20:53.449807   11879 start.go:360] acquireMachinesLock for addons-798865: {Name:mkbda4e6f45afd05a90712e9e2cf9f1a0287f8d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 10:20:53.449896   11879 start.go:364] duration metric: took 69.333µs to acquireMachinesLock for "addons-798865"
	I0415 10:20:53.449918   11879 start.go:93] Provisioning new machine with config: &{Name:addons-798865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-798865 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0415 10:20:53.449993   11879 start.go:125] createHost starting for "" (driver="docker")
	I0415 10:20:53.451830   11879 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0415 10:20:53.452026   11879 start.go:159] libmachine.API.Create for "addons-798865" (driver="docker")
	I0415 10:20:53.452051   11879 client.go:168] LocalClient.Create starting
	I0415 10:20:53.452161   11879 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca.pem
	I0415 10:20:53.561835   11879 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/cert.pem
	I0415 10:20:53.791650   11879 cli_runner.go:164] Run: docker network inspect addons-798865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 10:20:53.804814   11879 cli_runner.go:211] docker network inspect addons-798865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 10:20:53.804893   11879 network_create.go:281] running [docker network inspect addons-798865] to gather additional debugging logs...
	I0415 10:20:53.804921   11879 cli_runner.go:164] Run: docker network inspect addons-798865
	W0415 10:20:53.818282   11879 cli_runner.go:211] docker network inspect addons-798865 returned with exit code 1
	I0415 10:20:53.818305   11879 network_create.go:284] error running [docker network inspect addons-798865]: docker network inspect addons-798865: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-798865 not found
	I0415 10:20:53.818316   11879 network_create.go:286] output of [docker network inspect addons-798865]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-798865 not found
	
	** /stderr **
	I0415 10:20:53.818388   11879 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 10:20:53.833453   11879 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002ded7d0}
	I0415 10:20:53.833498   11879 network_create.go:124] attempt to create docker network addons-798865 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0415 10:20:53.833541   11879 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-798865 addons-798865
	I0415 10:20:53.886919   11879 network_create.go:108] docker network addons-798865 192.168.49.0/24 created
	I0415 10:20:53.886946   11879 kic.go:121] calculated static IP "192.168.49.2" for the "addons-798865" container
	I0415 10:20:53.886994   11879 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 10:20:53.900808   11879 cli_runner.go:164] Run: docker volume create addons-798865 --label name.minikube.sigs.k8s.io=addons-798865 --label created_by.minikube.sigs.k8s.io=true
	I0415 10:20:53.916381   11879 oci.go:103] Successfully created a docker volume addons-798865
	I0415 10:20:53.916442   11879 cli_runner.go:164] Run: docker run --rm --name addons-798865-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-798865 --entrypoint /usr/bin/test -v addons-798865:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 10:20:59.986059   11879 cli_runner.go:217] Completed: docker run --rm --name addons-798865-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-798865 --entrypoint /usr/bin/test -v addons-798865:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib: (6.069561454s)
	I0415 10:20:59.986095   11879 oci.go:107] Successfully prepared a docker volume addons-798865
	I0415 10:20:59.986126   11879 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 10:20:59.986145   11879 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 10:20:59.986222   11879 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-798865:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 10:21:04.585983   11879 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-798865:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir: (4.599721698s)
	I0415 10:21:04.586011   11879 kic.go:203] duration metric: took 4.599864417s to extract preloaded images to volume ...
	W0415 10:21:04.586137   11879 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0415 10:21:04.586241   11879 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0415 10:21:04.632773   11879 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-798865 --name addons-798865 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-798865 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-798865 --network addons-798865 --ip 192.168.49.2 --volume addons-798865:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f
	I0415 10:21:04.967299   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Running}}
	I0415 10:21:04.982818   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:05.000533   11879 cli_runner.go:164] Run: docker exec addons-798865 stat /var/lib/dpkg/alternatives/iptables
	I0415 10:21:05.042599   11879 oci.go:144] the created container "addons-798865" has a running status.
	I0415 10:21:05.042632   11879 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa...
	I0415 10:21:05.330320   11879 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0415 10:21:05.353917   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:05.373605   11879 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0415 10:21:05.373626   11879 kic_runner.go:114] Args: [docker exec --privileged addons-798865 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0415 10:21:05.447153   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:05.468289   11879 machine.go:94] provisionDockerMachine start ...
	I0415 10:21:05.468387   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:05.486204   11879 main.go:141] libmachine: Using SSH client type: native
	I0415 10:21:05.486406   11879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 10:21:05.486420   11879 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 10:21:05.635966   11879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-798865
	
	I0415 10:21:05.635997   11879 ubuntu.go:169] provisioning hostname "addons-798865"
	I0415 10:21:05.636069   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:05.652164   11879 main.go:141] libmachine: Using SSH client type: native
	I0415 10:21:05.652368   11879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 10:21:05.652385   11879 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-798865 && echo "addons-798865" | sudo tee /etc/hostname
	I0415 10:21:05.802498   11879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-798865
	
	I0415 10:21:05.802557   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:05.818270   11879 main.go:141] libmachine: Using SSH client type: native
	I0415 10:21:05.818435   11879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 10:21:05.818452   11879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-798865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-798865/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-798865' | sudo tee -a /etc/hosts; 
				fi
			fi
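The hostname patch minikube runs above can be replayed against a temporary copy of the hosts file, which makes the conditional logic easy to inspect without root. This is a hedged sketch: the seeded file contents and the pre-existing `oldname` entry are illustrative assumptions, not taken from the log, and GNU `sed -i` semantics are assumed.

```shell
NAME=addons-798865
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

# same shape as the logged script: only rewrite the 127.0.1.1 line if the
# hostname is not already present anywhere in the file
if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
```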
	I0415 10:21:05.952370   11879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 10:21:05.952399   11879 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18641-3502/.minikube CaCertPath:/home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18641-3502/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18641-3502/.minikube}
	I0415 10:21:05.952423   11879 ubuntu.go:177] setting up certificates
	I0415 10:21:05.952435   11879 provision.go:84] configureAuth start
	I0415 10:21:05.952487   11879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-798865
	I0415 10:21:05.967826   11879 provision.go:143] copyHostCerts
	I0415 10:21:05.967914   11879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18641-3502/.minikube/ca.pem (1082 bytes)
	I0415 10:21:05.968044   11879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18641-3502/.minikube/cert.pem (1123 bytes)
	I0415 10:21:05.968118   11879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18641-3502/.minikube/key.pem (1679 bytes)
	I0415 10:21:05.968184   11879 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18641-3502/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca-key.pem org=jenkins.addons-798865 san=[127.0.0.1 192.168.49.2 addons-798865 localhost minikube]
	I0415 10:21:06.052379   11879 provision.go:177] copyRemoteCerts
	I0415 10:21:06.052430   11879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 10:21:06.052461   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:06.067666   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:06.168927   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 10:21:06.190127   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 10:21:06.211211   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 10:21:06.232524   11879 provision.go:87] duration metric: took 280.075484ms to configureAuth
	I0415 10:21:06.232552   11879 ubuntu.go:193] setting minikube options for container-runtime
	I0415 10:21:06.232726   11879 config.go:182] Loaded profile config "addons-798865": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:21:06.232741   11879 machine.go:97] duration metric: took 764.43192ms to provisionDockerMachine
	I0415 10:21:06.232750   11879 client.go:171] duration metric: took 12.780690823s to LocalClient.Create
	I0415 10:21:06.232775   11879 start.go:167] duration metric: took 12.780747853s to libmachine.API.Create "addons-798865"
	I0415 10:21:06.232789   11879 start.go:293] postStartSetup for "addons-798865" (driver="docker")
	I0415 10:21:06.232801   11879 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 10:21:06.232849   11879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 10:21:06.232907   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:06.248934   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:06.348852   11879 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 10:21:06.351644   11879 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0415 10:21:06.351681   11879 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0415 10:21:06.351691   11879 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0415 10:21:06.351697   11879 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0415 10:21:06.351708   11879 filesync.go:126] Scanning /home/jenkins/minikube-integration/18641-3502/.minikube/addons for local assets ...
	I0415 10:21:06.351770   11879 filesync.go:126] Scanning /home/jenkins/minikube-integration/18641-3502/.minikube/files for local assets ...
	I0415 10:21:06.351799   11879 start.go:296] duration metric: took 119.003906ms for postStartSetup
	I0415 10:21:06.352128   11879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-798865
	I0415 10:21:06.367256   11879 profile.go:143] Saving config to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/config.json ...
	I0415 10:21:06.367530   11879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:21:06.367573   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:06.383264   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:06.480920   11879 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 10:21:06.484725   11879 start.go:128] duration metric: took 13.034718137s to createHost
	I0415 10:21:06.484746   11879 start.go:83] releasing machines lock for "addons-798865", held for 13.034838263s
	I0415 10:21:06.484793   11879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-798865
	I0415 10:21:06.500270   11879 ssh_runner.go:195] Run: cat /version.json
	I0415 10:21:06.500312   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:06.500396   11879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 10:21:06.500452   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:06.517295   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:06.518131   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:06.608096   11879 ssh_runner.go:195] Run: systemctl --version
	I0415 10:21:06.679085   11879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 10:21:06.683192   11879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0415 10:21:06.704799   11879 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0415 10:21:06.704873   11879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 10:21:06.728372   11879 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0415 10:21:06.728394   11879 start.go:494] detecting cgroup driver to use...
	I0415 10:21:06.728423   11879 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0415 10:21:06.728463   11879 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 10:21:06.738676   11879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 10:21:06.748627   11879 docker.go:217] disabling cri-docker service (if available) ...
	I0415 10:21:06.748677   11879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 10:21:06.760051   11879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 10:21:06.772064   11879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 10:21:06.843450   11879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 10:21:06.915634   11879 docker.go:233] disabling docker service ...
	I0415 10:21:06.915707   11879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 10:21:06.932828   11879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 10:21:06.942730   11879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 10:21:07.015672   11879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 10:21:07.087321   11879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0415 10:21:07.097218   11879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 10:21:07.111127   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 10:21:07.119952   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 10:21:07.128797   11879 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 10:21:07.128846   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 10:21:07.138178   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 10:21:07.147347   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 10:21:07.155752   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 10:21:07.164143   11879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 10:21:07.172353   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 10:21:07.181485   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 10:21:07.190279   11879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
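Two of the `config.toml` rewrites above (pinning the pause image, forcing the cgroupfs driver) can be replayed on a throwaway file. The sample TOML content below is an assumption for illustration; the `sed` expressions are the ones from the log.

```shell
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# pin the sandbox (pause) image and disable the systemd cgroup driver,
# preserving indentation via the captured leading-space group
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
```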
	I0415 10:21:07.199178   11879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 10:21:07.206273   11879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 10:21:07.213475   11879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 10:21:07.287506   11879 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 10:21:07.376557   11879 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0415 10:21:07.376651   11879 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0415 10:21:07.379945   11879 start.go:562] Will wait 60s for crictl version
	I0415 10:21:07.379989   11879 ssh_runner.go:195] Run: which crictl
	I0415 10:21:07.382794   11879 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 10:21:07.412834   11879 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.31
	RuntimeApiVersion:  v1
	I0415 10:21:07.412919   11879 ssh_runner.go:195] Run: containerd --version
	I0415 10:21:07.433443   11879 ssh_runner.go:195] Run: containerd --version
	I0415 10:21:07.455584   11879 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.31 ...
	I0415 10:21:07.457106   11879 cli_runner.go:164] Run: docker network inspect addons-798865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 10:21:07.471323   11879 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0415 10:21:07.474616   11879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
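The `{ grep -v …; echo …; } > tmp; cp` pattern above makes the `host.minikube.internal` entry idempotent: any stale line is dropped before exactly one fresh line is appended. Replayed here against a temp file; the seeded contents are an assumption.

```shell
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n192.168.49.1\thost.minikube.internal\n' > "$HOSTS"

# drop any old entry, then append exactly one fresh one
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"
```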
	I0415 10:21:07.484151   11879 kubeadm.go:877] updating cluster {Name:addons-798865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-798865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 10:21:07.484249   11879 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 10:21:07.484289   11879 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 10:21:07.514934   11879 containerd.go:627] all images are preloaded for containerd runtime.
	I0415 10:21:07.514954   11879 containerd.go:534] Images already preloaded, skipping extraction
	I0415 10:21:07.515003   11879 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 10:21:07.544641   11879 containerd.go:627] all images are preloaded for containerd runtime.
	I0415 10:21:07.544661   11879 cache_images.go:84] Images are preloaded, skipping loading
	I0415 10:21:07.544668   11879 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 containerd true true} ...
	I0415 10:21:07.544768   11879 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-798865 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-798865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 10:21:07.544830   11879 ssh_runner.go:195] Run: sudo crictl info
	I0415 10:21:07.576142   11879 cni.go:84] Creating CNI manager for ""
	I0415 10:21:07.576167   11879 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0415 10:21:07.576178   11879 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 10:21:07.576203   11879 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-798865 NodeName:addons-798865 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 10:21:07.576374   11879 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-798865"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
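A quick consistency check on the generated config: the `KubeletConfiguration` above must declare the same cgroup driver that containerd was configured with earlier in this log (`SystemdCgroup = false`, i.e. cgroupfs). The excerpt below is copied from the logged config; the extraction command is an illustrative sketch.

```shell
KCFG=$(mktemp)
cat > "$KCFG" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF

# pull out the declared driver for comparison against containerd's setting
driver=$(sed -n 's/^cgroupDriver: //p' "$KCFG")
```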
	I0415 10:21:07.576442   11879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 10:21:07.585146   11879 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 10:21:07.585194   11879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 10:21:07.592728   11879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0415 10:21:07.607718   11879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 10:21:07.622991   11879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0415 10:21:07.638361   11879 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0415 10:21:07.641430   11879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 10:21:07.650663   11879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 10:21:07.723261   11879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 10:21:07.735096   11879 certs.go:68] Setting up /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865 for IP: 192.168.49.2
	I0415 10:21:07.735115   11879 certs.go:194] generating shared ca certs ...
	I0415 10:21:07.735132   11879 certs.go:226] acquiring lock for ca certs: {Name:mk49c339e6c0b2588bc240d1fb3e89ad66c3799c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:07.735251   11879 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18641-3502/.minikube/ca.key
	I0415 10:21:07.944528   11879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18641-3502/.minikube/ca.crt ...
	I0415 10:21:07.944556   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/ca.crt: {Name:mk85ab80eaffd839300dfd0a97fa9a3393136709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:07.944749   11879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18641-3502/.minikube/ca.key ...
	I0415 10:21:07.944767   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/ca.key: {Name:mk74b4882cb7b28a70ae7340444d5e3bed488e64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:07.944857   11879 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18641-3502/.minikube/proxy-client-ca.key
	I0415 10:21:08.081728   11879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18641-3502/.minikube/proxy-client-ca.crt ...
	I0415 10:21:08.081754   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/proxy-client-ca.crt: {Name:mk50bb0d9a6bbaf90ff202b76e09428f35db4f99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:08.081932   11879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18641-3502/.minikube/proxy-client-ca.key ...
	I0415 10:21:08.081948   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/proxy-client-ca.key: {Name:mk67cc65772a6447a8793b758d92986d086874d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:08.082041   11879 certs.go:256] generating profile certs ...
	I0415 10:21:08.082164   11879 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.key
	I0415 10:21:08.082233   11879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt with IP's: []
	I0415 10:21:08.226532   11879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt ...
	I0415 10:21:08.226566   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: {Name:mk25d8a88a81373b4d8faa34f70d947953278e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:08.226776   11879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.key ...
	I0415 10:21:08.226794   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.key: {Name:mka656550f889cd6957a1245516891f1c3662093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:08.226913   11879 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.key.daad6f2b
	I0415 10:21:08.226945   11879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.crt.daad6f2b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0415 10:21:08.300851   11879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.crt.daad6f2b ...
	I0415 10:21:08.300881   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.crt.daad6f2b: {Name:mk4e3540e17133f5b23a03bfe3a91d099236c0f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:08.301088   11879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.key.daad6f2b ...
	I0415 10:21:08.301104   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.key.daad6f2b: {Name:mk6e57bc4b83b7a3249323a81eb7a4ffbef20b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:08.301202   11879 certs.go:381] copying /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.crt.daad6f2b -> /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.crt
	I0415 10:21:08.301310   11879 certs.go:385] copying /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.key.daad6f2b -> /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.key
	I0415 10:21:08.301384   11879 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/proxy-client.key
	I0415 10:21:08.301407   11879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/proxy-client.crt with IP's: []
	I0415 10:21:08.632195   11879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/proxy-client.crt ...
	I0415 10:21:08.632230   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/proxy-client.crt: {Name:mk213d7aa0b07c1294ff9ea2d5043a02869459ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:08.632418   11879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/proxy-client.key ...
	I0415 10:21:08.632431   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/proxy-client.key: {Name:mk9fc92d991556866ca0d723f8fbc8656d0325e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:08.632629   11879 certs.go:484] found cert: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca-key.pem (1675 bytes)
	I0415 10:21:08.632662   11879 certs.go:484] found cert: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/ca.pem (1082 bytes)
	I0415 10:21:08.632682   11879 certs.go:484] found cert: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/cert.pem (1123 bytes)
	I0415 10:21:08.632706   11879 certs.go:484] found cert: /home/jenkins/minikube-integration/18641-3502/.minikube/certs/key.pem (1679 bytes)
	I0415 10:21:08.633217   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 10:21:08.654919   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0415 10:21:08.676064   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 10:21:08.696763   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 10:21:08.716971   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0415 10:21:08.737554   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0415 10:21:08.758097   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 10:21:08.778057   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 10:21:08.799284   11879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18641-3502/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 10:21:08.820440   11879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 10:21:08.835939   11879 ssh_runner.go:195] Run: openssl version
	I0415 10:21:08.840827   11879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 10:21:08.848876   11879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 10:21:08.851704   11879 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0415 10:21:08.851752   11879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 10:21:08.857558   11879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 10:21:08.865235   11879 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 10:21:08.867897   11879 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 10:21:08.867944   11879 kubeadm.go:391] StartCluster: {Name:addons-798865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-798865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:21:08.868034   11879 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0415 10:21:08.868106   11879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0415 10:21:08.899130   11879 cri.go:89] found id: ""
	I0415 10:21:08.899188   11879 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 10:21:08.906909   11879 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 10:21:08.914619   11879 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0415 10:21:08.914664   11879 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 10:21:08.922819   11879 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 10:21:08.922839   11879 kubeadm.go:156] found existing configuration files:
	
	I0415 10:21:08.922876   11879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 10:21:08.930974   11879 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 10:21:08.931029   11879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 10:21:08.938622   11879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 10:21:08.946023   11879 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 10:21:08.946067   11879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 10:21:08.953148   11879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 10:21:08.960396   11879 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 10:21:08.960439   11879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 10:21:08.967598   11879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 10:21:08.975042   11879 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 10:21:08.975088   11879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 10:21:08.982117   11879 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0415 10:21:09.054013   11879 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-gcp\n", err: exit status 1
	I0415 10:21:09.111336   11879 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 10:21:18.952541   11879 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 10:21:18.952711   11879 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 10:21:18.952869   11879 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0415 10:21:18.952951   11879 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-gcp
	I0415 10:21:18.952998   11879 kubeadm.go:309] OS: Linux
	I0415 10:21:18.953059   11879 kubeadm.go:309] CGROUPS_CPU: enabled
	I0415 10:21:18.953118   11879 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0415 10:21:18.953177   11879 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0415 10:21:18.953235   11879 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0415 10:21:18.953299   11879 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0415 10:21:18.953371   11879 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0415 10:21:18.953438   11879 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0415 10:21:18.953512   11879 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0415 10:21:18.953574   11879 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0415 10:21:18.953681   11879 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 10:21:18.953815   11879 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 10:21:18.953931   11879 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 10:21:18.954016   11879 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 10:21:18.955982   11879 out.go:204]   - Generating certificates and keys ...
	I0415 10:21:18.956123   11879 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 10:21:18.956268   11879 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 10:21:18.956366   11879 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 10:21:18.956449   11879 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 10:21:18.956540   11879 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 10:21:18.956632   11879 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 10:21:18.956706   11879 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 10:21:18.956866   11879 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-798865 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0415 10:21:18.956933   11879 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 10:21:18.957078   11879 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-798865 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0415 10:21:18.957168   11879 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 10:21:18.957253   11879 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 10:21:18.957319   11879 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 10:21:18.957398   11879 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 10:21:18.957467   11879 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 10:21:18.957546   11879 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 10:21:18.957620   11879 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 10:21:18.957704   11879 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 10:21:18.957771   11879 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 10:21:18.957871   11879 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 10:21:18.957955   11879 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 10:21:18.960725   11879 out.go:204]   - Booting up control plane ...
	I0415 10:21:18.960846   11879 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 10:21:18.960969   11879 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 10:21:18.961099   11879 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 10:21:18.961235   11879 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 10:21:18.961346   11879 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 10:21:18.961405   11879 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 10:21:18.961603   11879 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 10:21:18.961708   11879 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.001826 seconds
	I0415 10:21:18.961850   11879 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 10:21:18.962013   11879 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 10:21:18.962080   11879 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 10:21:18.962294   11879 kubeadm.go:309] [mark-control-plane] Marking the node addons-798865 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 10:21:18.962383   11879 kubeadm.go:309] [bootstrap-token] Using token: 6ynr25.pfnsmem2g907sqw5
	I0415 10:21:18.965303   11879 out.go:204]   - Configuring RBAC rules ...
	I0415 10:21:18.965419   11879 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 10:21:18.965504   11879 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 10:21:18.965642   11879 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 10:21:18.965793   11879 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 10:21:18.965932   11879 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 10:21:18.966050   11879 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 10:21:18.966209   11879 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 10:21:18.966295   11879 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 10:21:18.966357   11879 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 10:21:18.966383   11879 kubeadm.go:309] 
	I0415 10:21:18.966459   11879 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 10:21:18.966468   11879 kubeadm.go:309] 
	I0415 10:21:18.966561   11879 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 10:21:18.966586   11879 kubeadm.go:309] 
	I0415 10:21:18.966621   11879 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 10:21:18.966701   11879 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 10:21:18.966770   11879 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 10:21:18.966779   11879 kubeadm.go:309] 
	I0415 10:21:18.966843   11879 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 10:21:18.966854   11879 kubeadm.go:309] 
	I0415 10:21:18.966916   11879 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 10:21:18.966925   11879 kubeadm.go:309] 
	I0415 10:21:18.966992   11879 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 10:21:18.967096   11879 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 10:21:18.967187   11879 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 10:21:18.967196   11879 kubeadm.go:309] 
	I0415 10:21:18.967302   11879 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 10:21:18.967394   11879 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 10:21:18.967403   11879 kubeadm.go:309] 
	I0415 10:21:18.967496   11879 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6ynr25.pfnsmem2g907sqw5 \
	I0415 10:21:18.967629   11879 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:44f8812684250d86592266e7246c07e598fabbac4e85f9a18d6ab75f48133c4d \
	I0415 10:21:18.967660   11879 kubeadm.go:309] 	--control-plane 
	I0415 10:21:18.967669   11879 kubeadm.go:309] 
	I0415 10:21:18.967760   11879 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 10:21:18.967768   11879 kubeadm.go:309] 
	I0415 10:21:18.967861   11879 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6ynr25.pfnsmem2g907sqw5 \
	I0415 10:21:18.967998   11879 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:44f8812684250d86592266e7246c07e598fabbac4e85f9a18d6ab75f48133c4d 
	I0415 10:21:18.968013   11879 cni.go:84] Creating CNI manager for ""
	I0415 10:21:18.968023   11879 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0415 10:21:18.969808   11879 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 10:21:18.971171   11879 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 10:21:18.975290   11879 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 10:21:18.975347   11879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 10:21:18.998048   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 10:21:19.224878   11879 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 10:21:19.224912   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:19.224970   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-798865 minikube.k8s.io/updated_at=2024_04_15T10_21_19_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=9c1d049bb371ec65637a9a4aa595cb21c815e116 minikube.k8s.io/name=addons-798865 minikube.k8s.io/primary=true
	I0415 10:21:19.306165   11879 ops.go:34] apiserver oom_adj: -16
	I0415 10:21:19.306298   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:19.806517   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:20.306410   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:20.806337   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:21.306459   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:21.806996   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:22.306945   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:22.806478   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:23.306906   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:23.807341   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:24.307325   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:24.807211   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:25.306614   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:25.807058   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:26.306992   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:26.806337   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:27.306575   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:27.807114   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:28.307299   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:28.807022   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:29.307362   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:29.806845   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:30.306745   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:30.806880   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:31.306618   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:31.806467   11879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 10:21:31.873259   11879 kubeadm.go:1107] duration metric: took 12.648383049s to wait for elevateKubeSystemPrivileges
	W0415 10:21:31.873303   11879 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 10:21:31.873311   11879 kubeadm.go:393] duration metric: took 23.005371716s to StartCluster
	I0415 10:21:31.873329   11879 settings.go:142] acquiring lock: {Name:mk5cc9269cb301a8f4e0a73136d0967412852914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:31.873454   11879 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18641-3502/kubeconfig
	I0415 10:21:31.873852   11879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/kubeconfig: {Name:mkb6467e90b503ea0b8d79c845a843659133c185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:21:31.874024   11879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 10:21:31.874033   11879 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0415 10:21:31.875861   11879 out.go:177] * Verifying Kubernetes components...
	I0415 10:21:31.874123   11879 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0415 10:21:31.874263   11879 config.go:182] Loaded profile config "addons-798865": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:21:31.877117   11879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 10:21:31.877146   11879 addons.go:69] Setting cloud-spanner=true in profile "addons-798865"
	I0415 10:21:31.877158   11879 addons.go:69] Setting metrics-server=true in profile "addons-798865"
	I0415 10:21:31.877171   11879 addons.go:69] Setting default-storageclass=true in profile "addons-798865"
	I0415 10:21:31.877184   11879 addons.go:69] Setting storage-provisioner=true in profile "addons-798865"
	I0415 10:21:31.877193   11879 addons.go:234] Setting addon cloud-spanner=true in "addons-798865"
	I0415 10:21:31.877165   11879 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-798865"
	I0415 10:21:31.877198   11879 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-798865"
	I0415 10:21:31.877204   11879 addons.go:69] Setting gcp-auth=true in profile "addons-798865"
	I0415 10:21:31.877214   11879 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-798865"
	I0415 10:21:31.877225   11879 mustload.go:65] Loading cluster: addons-798865
	I0415 10:21:31.877227   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877240   11879 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-798865"
	I0415 10:21:31.877247   11879 addons.go:69] Setting registry=true in profile "addons-798865"
	I0415 10:21:31.877269   11879 addons.go:234] Setting addon registry=true in "addons-798865"
	I0415 10:21:31.877274   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877151   11879 addons.go:69] Setting yakd=true in profile "addons-798865"
	I0415 10:21:31.877299   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877195   11879 addons.go:234] Setting addon metrics-server=true in "addons-798865"
	I0415 10:21:31.877329   11879 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-798865"
	I0415 10:21:31.877344   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877347   11879 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-798865"
	I0415 10:21:31.877416   11879 config.go:182] Loaded profile config "addons-798865": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:21:31.877543   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.877620   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.877609   11879 addons.go:69] Setting ingress-dns=true in profile "addons-798865"
	I0415 10:21:31.877651   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.877652   11879 addons.go:234] Setting addon ingress-dns=true in "addons-798865"
	I0415 10:21:31.877711   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877733   11879 addons.go:69] Setting volumesnapshots=true in profile "addons-798865"
	I0415 10:21:31.877789   11879 addons.go:234] Setting addon volumesnapshots=true in "addons-798865"
	I0415 10:21:31.877818   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877207   11879 addons.go:234] Setting addon storage-provisioner=true in "addons-798865"
	I0415 10:21:31.877864   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877738   11879 addons.go:69] Setting ingress=true in profile "addons-798865"
	I0415 10:21:31.877921   11879 addons.go:234] Setting addon ingress=true in "addons-798865"
	I0415 10:21:31.877952   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.878175   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.878242   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.878249   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.878256   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.877242   11879 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-798865"
	I0415 10:21:31.878412   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.878429   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.878848   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.877725   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.877159   11879 addons.go:69] Setting inspektor-gadget=true in profile "addons-798865"
	I0415 10:21:31.879236   11879 addons.go:234] Setting addon inspektor-gadget=true in "addons-798865"
	I0415 10:21:31.879268   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877194   11879 addons.go:69] Setting helm-tiller=true in profile "addons-798865"
	I0415 10:21:31.879392   11879 addons.go:234] Setting addon helm-tiller=true in "addons-798865"
	I0415 10:21:31.879415   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.877727   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.877749   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.877320   11879 addons.go:234] Setting addon yakd=true in "addons-798865"
	I0415 10:21:31.879891   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.909763   11879 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0415 10:21:31.909035   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.909073   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.909098   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.909128   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.911621   11879 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0415 10:21:31.911638   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0415 10:21:31.911696   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.913082   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0415 10:21:31.914685   11879 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0415 10:21:31.914705   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0415 10:21:31.914753   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.914859   11879 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-798865"
	I0415 10:21:31.914896   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.915419   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.915651   11879 addons.go:234] Setting addon default-storageclass=true in "addons-798865"
	I0415 10:21:31.915685   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:31.916080   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:31.914271   11879 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0415 10:21:31.917853   11879 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 10:21:31.917874   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0415 10:21:31.917925   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.950248   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0415 10:21:31.951811   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0415 10:21:31.953254   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0415 10:21:31.954847   11879 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0415 10:21:31.954821   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0415 10:21:31.957825   11879 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 10:21:31.959279   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0415 10:21:31.959284   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0415 10:21:31.960665   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0415 10:21:31.959341   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.963371   11879 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0415 10:21:31.965227   11879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0415 10:21:31.966818   11879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 10:21:31.965201   11879 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0415 10:21:31.965212   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0415 10:21:31.968518   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0415 10:21:31.970232   11879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0415 10:21:31.984777   11879 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0415 10:21:31.984800   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0415 10:21:31.984851   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.970266   11879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 10:21:31.970335   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.979506   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:31.984022   11879 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 10:21:31.984655   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:31.986692   11879 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 10:21:31.987904   11879 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0415 10:21:31.987943   11879 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 10:21:31.988051   11879 out.go:177]   - Using image docker.io/registry:2.8.3
	I0415 10:21:31.988077   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 10:21:31.989444   11879 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0415 10:21:31.992473   11879 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0415 10:21:31.992489   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0415 10:21:31.992535   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.994278   11879 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0415 10:21:31.994294   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0415 10:21:31.994336   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.991295   11879 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0415 10:21:31.991306   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0415 10:21:31.991365   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.995853   11879 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0415 10:21:31.995908   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:31.997743   11879 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 10:21:31.999326   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0415 10:21:31.999336   11879 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0415 10:21:32.003701   11879 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0415 10:21:32.006484   11879 out.go:177]   - Using image docker.io/busybox:stable
	I0415 10:21:32.007960   11879 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 10:21:32.007980   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0415 10:21:32.008037   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:32.006503   11879 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0415 10:21:32.008310   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0415 10:21:32.008354   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:32.003720   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 10:21:32.008659   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.008676   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:32.003781   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:32.013893   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.031213   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.032066   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.032216   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.035586   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.044669   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.045237   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.054105   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.058672   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.072698   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.072696   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:32.153404   11879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 10:21:32.153511   11879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 10:21:32.363816   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0415 10:21:32.457694   11879 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0415 10:21:32.457778   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0415 10:21:32.552677   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 10:21:32.560259   11879 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0415 10:21:32.560288   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0415 10:21:32.638529   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 10:21:32.639135   11879 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0415 10:21:32.639156   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0415 10:21:32.651010   11879 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0415 10:21:32.651089   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0415 10:21:32.654922   11879 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0415 10:21:32.654996   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0415 10:21:32.660607   11879 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0415 10:21:32.660672   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0415 10:21:32.738256   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 10:21:32.741709   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 10:21:32.750520   11879 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0415 10:21:32.750556   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0415 10:21:32.756556   11879 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0415 10:21:32.756659   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0415 10:21:32.837462   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 10:21:32.841962   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 10:21:32.846082   11879 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0415 10:21:32.846107   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0415 10:21:32.849612   11879 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0415 10:21:32.849693   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0415 10:21:32.856226   11879 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0415 10:21:32.856253   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0415 10:21:32.943378   11879 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0415 10:21:32.943500   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0415 10:21:32.945755   11879 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0415 10:21:32.945821   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0415 10:21:33.141483   11879 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0415 10:21:33.141578   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0415 10:21:33.143918   11879 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0415 10:21:33.143974   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0415 10:21:33.148643   11879 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 10:21:33.148667   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0415 10:21:33.158060   11879 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0415 10:21:33.158129   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0415 10:21:33.240856   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0415 10:21:33.544441   11879 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0415 10:21:33.544522   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0415 10:21:33.545877   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 10:21:33.546264   11879 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0415 10:21:33.546294   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0415 10:21:33.637334   11879 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0415 10:21:33.637420   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0415 10:21:33.644777   11879 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.491234606s)
	I0415 10:21:33.645837   11879 node_ready.go:35] waiting up to 6m0s for node "addons-798865" to be "Ready" ...
	I0415 10:21:33.646164   11879 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.492697662s)
	I0415 10:21:33.646225   11879 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0415 10:21:33.649352   11879 node_ready.go:49] node "addons-798865" has status "Ready":"True"
	I0415 10:21:33.649427   11879 node_ready.go:38] duration metric: took 3.510051ms for node "addons-798865" to be "Ready" ...
	I0415 10:21:33.649447   11879 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 10:21:33.657745   11879 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0415 10:21:33.657816   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0415 10:21:33.662960   11879 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rdlqd" in "kube-system" namespace to be "Ready" ...
	I0415 10:21:33.839251   11879 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0415 10:21:33.839284   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0415 10:21:33.940829   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0415 10:21:34.040408   11879 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 10:21:34.040433   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0415 10:21:34.040868   11879 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0415 10:21:34.040888   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0415 10:21:34.050876   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0415 10:21:34.150437   11879 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-798865" context rescaled to 1 replicas
	I0415 10:21:34.249101   11879 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0415 10:21:34.249183   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0415 10:21:34.340348   11879 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0415 10:21:34.340429   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0415 10:21:34.453247   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 10:21:34.537264   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.17340995s)
	I0415 10:21:34.537333   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.984628512s)
	I0415 10:21:34.553239   11879 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0415 10:21:34.553317   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0415 10:21:34.739523   11879 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0415 10:21:34.739624   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0415 10:21:34.740222   11879 pod_ready.go:97] error getting pod "coredns-76f75df574-rdlqd" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-rdlqd" not found
	I0415 10:21:34.740297   11879 pod_ready.go:81] duration metric: took 1.077309242s for pod "coredns-76f75df574-rdlqd" in "kube-system" namespace to be "Ready" ...
	E0415 10:21:34.740321   11879 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-rdlqd" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-rdlqd" not found
	I0415 10:21:34.740340   11879 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wc54r" in "kube-system" namespace to be "Ready" ...
	I0415 10:21:35.041564   11879 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0415 10:21:35.041642   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0415 10:21:35.048076   11879 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 10:21:35.048110   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0415 10:21:35.248780   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 10:21:35.258243   11879 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0415 10:21:35.258273   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0415 10:21:35.547359   11879 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0415 10:21:35.547441   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0415 10:21:35.856799   11879 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 10:21:35.856877   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0415 10:21:36.160204   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 10:21:36.758725   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:38.947345   11879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0415 10:21:38.947480   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:38.970443   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:39.252654   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:39.256469   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.617896206s)
	I0415 10:21:39.256516   11879 addons.go:470] Verifying addon ingress=true in "addons-798865"
	I0415 10:21:39.256551   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.518221013s)
	I0415 10:21:39.258209   11879 out.go:177] * Verifying ingress addon...
	I0415 10:21:39.256637   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.514848902s)
	I0415 10:21:39.256699   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.419137616s)
	I0415 10:21:39.256773   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.414735498s)
	I0415 10:21:39.256812   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.015884157s)
	I0415 10:21:39.256867   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.710922882s)
	I0415 10:21:39.256955   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.316037729s)
	I0415 10:21:39.256994   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.206022953s)
	I0415 10:21:39.260127   11879 addons.go:470] Verifying addon registry=true in "addons-798865"
	I0415 10:21:39.260146   11879 addons.go:470] Verifying addon metrics-server=true in "addons-798865"
	I0415 10:21:39.261725   11879 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-798865 service yakd-dashboard -n yakd-dashboard
	
	I0415 10:21:39.261262   11879 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0415 10:21:39.263181   11879 out.go:177] * Verifying registry addon...
	I0415 10:21:39.265121   11879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0415 10:21:39.342010   11879 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0415 10:21:39.342046   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:39.344191   11879 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0415 10:21:39.344261   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:39.443479   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.99012776s)
	W0415 10:21:39.443530   11879 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0415 10:21:39.443555   11879 retry.go:31] will retry after 359.788769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0415 10:21:39.443627   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.194810795s)
	I0415 10:21:39.559660   11879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0415 10:21:39.640335   11879 addons.go:234] Setting addon gcp-auth=true in "addons-798865"
	I0415 10:21:39.640400   11879 host.go:66] Checking if "addons-798865" exists ...
	I0415 10:21:39.640904   11879 cli_runner.go:164] Run: docker container inspect addons-798865 --format={{.State.Status}}
	I0415 10:21:39.662165   11879 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0415 10:21:39.662218   11879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-798865
	I0415 10:21:39.679209   11879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/addons-798865/id_rsa Username:docker}
	I0415 10:21:39.767744   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:39.775332   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:39.804065   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 10:21:40.268395   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:40.269140   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:40.767870   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:40.769321   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:40.968213   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.807954336s)
	I0415 10:21:40.968253   11879 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-798865"
	I0415 10:21:40.970212   11879 out.go:177] * Verifying csi-hostpath-driver addon...
	I0415 10:21:40.972657   11879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0415 10:21:41.044350   11879 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0415 10:21:41.044423   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:41.258585   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:41.261272   11879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.457163185s)
	I0415 10:21:41.261388   11879 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.599197277s)
	I0415 10:21:41.264370   11879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 10:21:41.338314   11879 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0415 10:21:41.339998   11879 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0415 10:21:41.340025   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0415 10:21:41.341558   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:41.342339   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:41.359705   11879 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0415 10:21:41.359739   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0415 10:21:41.377688   11879 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 10:21:41.377712   11879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0415 10:21:41.395331   11879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 10:21:41.478434   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:41.767803   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:41.770422   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:41.979637   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:42.056359   11879 addons.go:470] Verifying addon gcp-auth=true in "addons-798865"
	I0415 10:21:42.057909   11879 out.go:177] * Verifying gcp-auth addon...
	I0415 10:21:42.060205   11879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0415 10:21:42.062757   11879 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0415 10:21:42.062775   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:42.267498   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:42.270213   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:42.477811   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:42.563944   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:42.768172   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:42.769970   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:42.978745   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:43.063556   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:43.268042   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:43.270217   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:43.478537   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:43.563998   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:43.746193   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:43.768146   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:43.768937   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:43.978432   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:44.063002   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:44.267802   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:44.269108   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:44.478182   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:44.563980   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:44.767259   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:44.769411   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:44.977580   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:45.063824   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:45.267216   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:45.269190   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:45.477341   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:45.563452   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:45.747477   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:45.767292   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:45.769360   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:45.977706   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:46.064080   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:46.269719   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:46.272007   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:46.477924   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:46.563497   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:46.767913   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:46.769594   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:46.978704   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:47.073290   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:47.268126   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:47.270007   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:47.478494   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:47.563991   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:47.766756   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:47.769120   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:47.978062   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:48.064431   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:48.246362   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:48.267963   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:48.269210   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:48.478424   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:48.563920   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:48.767865   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:48.769910   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:48.978764   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:49.063866   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:49.267188   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:49.269809   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:49.593717   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:49.667315   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:49.767151   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:49.769193   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:50.043326   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:50.063381   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:50.246536   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:50.267161   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:50.269511   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:50.477757   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:50.595446   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:50.766656   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:50.769241   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:50.977509   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:51.063631   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:51.267814   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:51.270121   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:51.477791   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:51.564137   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:51.767758   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:51.769547   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:51.978624   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:52.063291   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:52.267497   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:52.269850   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:52.478784   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:52.563976   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:52.746195   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:52.767600   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:52.769722   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:52.977830   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:53.064033   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:53.267138   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:53.268968   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:53.478595   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:53.563647   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:53.768147   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:53.769259   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:53.977463   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:54.063624   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:54.267529   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:54.269696   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:54.478080   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:54.563336   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:54.746429   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:54.767181   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:54.769140   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:54.978230   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:55.063304   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:55.268331   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:55.270398   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:55.477433   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:55.563354   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:55.767911   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:55.769080   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:55.977166   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:56.063052   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:56.267330   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:56.269770   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:56.477641   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:56.563796   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:56.767073   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:56.770307   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:56.977999   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:57.063656   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:57.245494   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:57.266498   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:57.268895   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:57.477990   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:57.563238   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:57.767639   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:57.768724   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:57.978167   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:58.063087   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:58.267711   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:58.269482   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:58.478021   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:58.562956   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:58.767455   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:58.769736   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:58.978330   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:59.063416   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:59.246360   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:21:59.266939   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:59.269178   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:59.477174   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:21:59.563247   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:21:59.767826   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:21:59.769989   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:21:59.978286   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:00.063152   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:00.267542   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:00.269711   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:00.477835   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:00.564272   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:00.768163   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:00.769770   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:00.978326   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:01.063676   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:01.249104   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:22:01.266882   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:01.269511   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:01.478122   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:01.563077   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:01.767673   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:01.768783   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:01.977271   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:02.063408   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:02.267639   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:02.269509   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:02.477972   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:02.563914   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:02.767455   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:02.771279   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:02.977460   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:03.063515   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:03.267438   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:03.269665   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:03.478035   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:03.564011   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:03.745913   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:22:03.767334   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:03.769562   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:03.977829   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:04.063960   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:04.267271   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:04.269452   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:04.477419   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:04.563808   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:04.767249   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:04.769381   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:04.978539   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:05.064300   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:05.267611   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:05.268752   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:05.477950   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:05.564129   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:05.745975   11879 pod_ready.go:102] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"False"
	I0415 10:22:05.767116   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:05.769413   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:05.978187   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:06.063978   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:06.267266   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:06.269332   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:06.478298   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:06.564330   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:06.768255   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:06.776852   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:07.042937   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:07.063800   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:07.267559   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:07.270229   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:07.477964   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:07.563216   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:07.746875   11879 pod_ready.go:92] pod "coredns-76f75df574-wc54r" in "kube-system" namespace has status "Ready":"True"
	I0415 10:22:07.746943   11879 pod_ready.go:81] duration metric: took 33.006569323s for pod "coredns-76f75df574-wc54r" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.746961   11879 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-798865" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.751560   11879 pod_ready.go:92] pod "etcd-addons-798865" in "kube-system" namespace has status "Ready":"True"
	I0415 10:22:07.751582   11879 pod_ready.go:81] duration metric: took 4.612683ms for pod "etcd-addons-798865" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.751596   11879 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-798865" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.755441   11879 pod_ready.go:92] pod "kube-apiserver-addons-798865" in "kube-system" namespace has status "Ready":"True"
	I0415 10:22:07.755462   11879 pod_ready.go:81] duration metric: took 3.857264ms for pod "kube-apiserver-addons-798865" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.755474   11879 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-798865" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.759600   11879 pod_ready.go:92] pod "kube-controller-manager-addons-798865" in "kube-system" namespace has status "Ready":"True"
	I0415 10:22:07.759622   11879 pod_ready.go:81] duration metric: took 4.140058ms for pod "kube-controller-manager-addons-798865" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.759634   11879 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c25mb" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.763647   11879 pod_ready.go:92] pod "kube-proxy-c25mb" in "kube-system" namespace has status "Ready":"True"
	I0415 10:22:07.763664   11879 pod_ready.go:81] duration metric: took 4.023245ms for pod "kube-proxy-c25mb" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.763672   11879 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-798865" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:07.766139   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:07.768658   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:07.980743   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:08.063783   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:08.144531   11879 pod_ready.go:92] pod "kube-scheduler-addons-798865" in "kube-system" namespace has status "Ready":"True"
	I0415 10:22:08.144564   11879 pod_ready.go:81] duration metric: took 380.884923ms for pod "kube-scheduler-addons-798865" in "kube-system" namespace to be "Ready" ...
	I0415 10:22:08.144606   11879 pod_ready.go:38] duration metric: took 34.495136116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 10:22:08.144628   11879 api_server.go:52] waiting for apiserver process to appear ...
	I0415 10:22:08.144687   11879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 10:22:08.157813   11879 api_server.go:72] duration metric: took 36.283758559s to wait for apiserver process to appear ...
	I0415 10:22:08.157836   11879 api_server.go:88] waiting for apiserver healthz status ...
	I0415 10:22:08.157860   11879 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0415 10:22:08.161390   11879 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0415 10:22:08.162453   11879 api_server.go:141] control plane version: v1.29.3
	I0415 10:22:08.162477   11879 api_server.go:131] duration metric: took 4.635281ms to wait for apiserver health ...
	I0415 10:22:08.162485   11879 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 10:22:08.267767   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:08.270110   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:08.352440   11879 system_pods.go:59] 19 kube-system pods found
	I0415 10:22:08.352473   11879 system_pods.go:61] "coredns-76f75df574-wc54r" [263051c6-a4c9-42cd-bc1c-0122417ede28] Running
	I0415 10:22:08.352483   11879 system_pods.go:61] "csi-hostpath-attacher-0" [f13c852e-4a5f-4cb0-b02b-cac9ce9e4b8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 10:22:08.352491   11879 system_pods.go:61] "csi-hostpath-resizer-0" [27e5459f-5be1-4628-9c97-1ff04f38db6a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 10:22:08.352502   11879 system_pods.go:61] "csi-hostpathplugin-4slgb" [7cf9feed-4602-4423-b1d4-f68afc499676] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 10:22:08.352508   11879 system_pods.go:61] "etcd-addons-798865" [aebdd9e3-555a-4fe9-bfdf-c476c70f16af] Running
	I0415 10:22:08.352514   11879 system_pods.go:61] "kindnet-hl4pc" [4fb2786c-c77a-4ee1-a9b3-723b9e0186e0] Running
	I0415 10:22:08.352519   11879 system_pods.go:61] "kube-apiserver-addons-798865" [1ab7121b-d379-4f04-8f19-4a7d38cc3998] Running
	I0415 10:22:08.352535   11879 system_pods.go:61] "kube-controller-manager-addons-798865" [c2daa98b-28b8-44e3-94ee-1a7e3377745e] Running
	I0415 10:22:08.352547   11879 system_pods.go:61] "kube-ingress-dns-minikube" [eb3f1a55-6d90-4ad8-99ec-7de3326bad9d] Running
	I0415 10:22:08.352554   11879 system_pods.go:61] "kube-proxy-c25mb" [afb5603e-1c45-4513-b465-1a59beeaf2d7] Running
	I0415 10:22:08.352559   11879 system_pods.go:61] "kube-scheduler-addons-798865" [a1a1703c-9f31-4cb5-8a13-ff39b25f3d27] Running
	I0415 10:22:08.352566   11879 system_pods.go:61] "metrics-server-75d6c48ddd-cb5th" [30cef0c5-25a7-4dec-ad25-da36bcf2a50f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 10:22:08.352586   11879 system_pods.go:61] "nvidia-device-plugin-daemonset-ldbwl" [089bfdb5-0cbf-430f-9fa8-e7cc07a01fc0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0415 10:22:08.352599   11879 system_pods.go:61] "registry-7nrz6" [ea92d092-8004-4e2e-8ae8-25103bf4b26f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0415 10:22:08.352608   11879 system_pods.go:61] "registry-proxy-9fjsk" [bb04e702-be5c-4700-ad5a-503fe20c5cb5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0415 10:22:08.352619   11879 system_pods.go:61] "snapshot-controller-58dbcc7b99-bsfcv" [c4509aba-a486-4274-81b6-efb05eb19d06] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 10:22:08.352632   11879 system_pods.go:61] "snapshot-controller-58dbcc7b99-s2f56" [7fea2266-da20-4b77-9433-2b8d61f8db97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 10:22:08.352639   11879 system_pods.go:61] "storage-provisioner" [c9159644-ad37-4adc-965c-69712bb5367d] Running
	I0415 10:22:08.352648   11879 system_pods.go:61] "tiller-deploy-7b677967b9-86zw7" [c9230124-5d27-4cd5-bdcc-12793f06841e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0415 10:22:08.352656   11879 system_pods.go:74] duration metric: took 190.165585ms to wait for pod list to return data ...
	I0415 10:22:08.352665   11879 default_sa.go:34] waiting for default service account to be created ...
	I0415 10:22:08.479204   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:08.544855   11879 default_sa.go:45] found service account: "default"
	I0415 10:22:08.544883   11879 default_sa.go:55] duration metric: took 192.209494ms for default service account to be created ...
	I0415 10:22:08.544894   11879 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 10:22:08.564132   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:08.751812   11879 system_pods.go:86] 19 kube-system pods found
	I0415 10:22:08.751841   11879 system_pods.go:89] "coredns-76f75df574-wc54r" [263051c6-a4c9-42cd-bc1c-0122417ede28] Running
	I0415 10:22:08.751849   11879 system_pods.go:89] "csi-hostpath-attacher-0" [f13c852e-4a5f-4cb0-b02b-cac9ce9e4b8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 10:22:08.751856   11879 system_pods.go:89] "csi-hostpath-resizer-0" [27e5459f-5be1-4628-9c97-1ff04f38db6a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 10:22:08.751866   11879 system_pods.go:89] "csi-hostpathplugin-4slgb" [7cf9feed-4602-4423-b1d4-f68afc499676] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 10:22:08.751871   11879 system_pods.go:89] "etcd-addons-798865" [aebdd9e3-555a-4fe9-bfdf-c476c70f16af] Running
	I0415 10:22:08.751878   11879 system_pods.go:89] "kindnet-hl4pc" [4fb2786c-c77a-4ee1-a9b3-723b9e0186e0] Running
	I0415 10:22:08.751884   11879 system_pods.go:89] "kube-apiserver-addons-798865" [1ab7121b-d379-4f04-8f19-4a7d38cc3998] Running
	I0415 10:22:08.751889   11879 system_pods.go:89] "kube-controller-manager-addons-798865" [c2daa98b-28b8-44e3-94ee-1a7e3377745e] Running
	I0415 10:22:08.751896   11879 system_pods.go:89] "kube-ingress-dns-minikube" [eb3f1a55-6d90-4ad8-99ec-7de3326bad9d] Running
	I0415 10:22:08.751900   11879 system_pods.go:89] "kube-proxy-c25mb" [afb5603e-1c45-4513-b465-1a59beeaf2d7] Running
	I0415 10:22:08.751906   11879 system_pods.go:89] "kube-scheduler-addons-798865" [a1a1703c-9f31-4cb5-8a13-ff39b25f3d27] Running
	I0415 10:22:08.751912   11879 system_pods.go:89] "metrics-server-75d6c48ddd-cb5th" [30cef0c5-25a7-4dec-ad25-da36bcf2a50f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 10:22:08.751921   11879 system_pods.go:89] "nvidia-device-plugin-daemonset-ldbwl" [089bfdb5-0cbf-430f-9fa8-e7cc07a01fc0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0415 10:22:08.751927   11879 system_pods.go:89] "registry-7nrz6" [ea92d092-8004-4e2e-8ae8-25103bf4b26f] Running
	I0415 10:22:08.751933   11879 system_pods.go:89] "registry-proxy-9fjsk" [bb04e702-be5c-4700-ad5a-503fe20c5cb5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0415 10:22:08.751940   11879 system_pods.go:89] "snapshot-controller-58dbcc7b99-bsfcv" [c4509aba-a486-4274-81b6-efb05eb19d06] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 10:22:08.751946   11879 system_pods.go:89] "snapshot-controller-58dbcc7b99-s2f56" [7fea2266-da20-4b77-9433-2b8d61f8db97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 10:22:08.751951   11879 system_pods.go:89] "storage-provisioner" [c9159644-ad37-4adc-965c-69712bb5367d] Running
	I0415 10:22:08.751960   11879 system_pods.go:89] "tiller-deploy-7b677967b9-86zw7" [c9230124-5d27-4cd5-bdcc-12793f06841e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0415 10:22:08.751968   11879 system_pods.go:126] duration metric: took 207.067801ms to wait for k8s-apps to be running ...
	I0415 10:22:08.751979   11879 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 10:22:08.752018   11879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:22:08.763018   11879 system_svc.go:56] duration metric: took 11.031183ms WaitForService to wait for kubelet
	I0415 10:22:08.763048   11879 kubeadm.go:576] duration metric: took 36.888992554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 10:22:08.763075   11879 node_conditions.go:102] verifying NodePressure condition ...
	I0415 10:22:08.767818   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:08.768977   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:08.944732   11879 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0415 10:22:08.944761   11879 node_conditions.go:123] node cpu capacity is 8
	I0415 10:22:08.944777   11879 node_conditions.go:105] duration metric: took 181.69439ms to run NodePressure ...
	I0415 10:22:08.944791   11879 start.go:240] waiting for startup goroutines ...
	I0415 10:22:08.978088   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:09.064223   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:09.267220   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:09.269622   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:09.478056   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:09.563192   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:09.768160   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:09.770005   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:10.043264   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:10.063941   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:10.268719   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:10.270598   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:10.478051   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:10.564560   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:10.767203   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:10.769629   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:10.977972   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:11.064391   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:11.267503   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:11.269209   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:11.480340   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:11.563835   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:11.767820   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:11.769691   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:11.978332   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:12.064936   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:12.268261   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:12.270082   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:12.478646   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:12.564015   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:12.769309   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:12.770252   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:12.978805   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:13.063799   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:13.267554   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:13.269516   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:13.480341   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:13.563397   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:13.767307   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:13.769033   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:13.977263   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:14.063391   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:14.267240   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:14.269377   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:14.477742   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:14.563912   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:14.767901   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:14.770099   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:14.978654   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:15.063969   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:15.268052   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:15.270260   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:15.478421   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:15.564537   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:15.767733   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:15.769474   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:15.978677   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:16.064139   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:16.268315   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:16.269691   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:16.478141   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:16.563135   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:16.767842   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:16.768727   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:16.978201   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:17.063769   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:17.268252   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:17.270015   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 10:22:17.480722   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:17.566232   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:17.770607   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:17.771344   11879 kapi.go:107] duration metric: took 38.506220386s to wait for kubernetes.io/minikube-addons=registry ...
	I0415 10:22:17.977767   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:18.066367   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:18.266758   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:18.478291   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:18.696368   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:18.830241   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:18.980699   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:19.064077   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:19.267594   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:19.478836   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:19.563825   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:19.767650   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:19.979286   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:20.063474   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:20.269657   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:20.477502   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:20.563146   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:20.767881   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:20.978724   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:21.064062   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:21.267939   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:21.478464   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:21.563326   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:21.769434   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:21.978530   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:22.063615   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:22.267447   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:22.477537   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:22.563582   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:22.769827   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:22.978374   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:23.064540   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:23.267369   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:23.478286   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:23.563214   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:23.767552   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:23.979091   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:24.064050   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:24.267633   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:24.477530   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:24.565934   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:24.767625   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:24.977465   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:25.063183   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:25.267496   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:25.477390   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:25.563306   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:25.767850   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:25.978936   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:26.064053   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:26.267572   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:26.477664   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:26.563524   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:26.767173   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:26.978319   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:27.063556   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:27.266999   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:27.478262   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:27.563216   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:27.768503   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:27.978677   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:28.064385   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:28.267691   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:28.478483   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:28.563598   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:28.767082   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:28.978494   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:29.063652   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:29.267552   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:29.478304   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:29.563155   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:29.767677   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:29.978117   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:30.064053   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:30.267449   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:30.478426   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:30.563272   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:30.767854   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:30.979510   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:31.063772   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:31.267595   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:31.479477   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:31.564738   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:31.767686   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:31.978818   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:32.064073   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:32.268279   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:32.478275   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:32.564800   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:32.766770   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:32.977881   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:33.063976   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:33.268423   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:33.478765   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:33.564198   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:33.768006   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:33.978841   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:34.064117   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:34.267454   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:34.477723   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:34.564320   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:34.767289   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:34.977391   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:35.064146   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:35.268526   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:35.478054   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:35.564465   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:35.766977   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:35.979040   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:36.063560   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:36.267283   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:36.478206   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:36.563228   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:36.768035   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:36.981607   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:37.063799   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:37.267455   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:37.478367   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:37.563314   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:37.767796   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:37.978897   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:38.064452   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:38.267311   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:38.478182   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 10:22:38.563415   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:38.767162   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:38.978027   11879 kapi.go:107] duration metric: took 58.005367059s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0415 10:22:39.064087   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:39.268198   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:39.563663   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:39.767315   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:40.063651   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:40.267414   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:40.563758   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:40.767536   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:41.064070   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:41.267599   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:41.563412   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:41.767003   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:42.063796   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:42.267292   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:42.563660   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:42.767439   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:43.064167   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:43.267412   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:43.564144   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:43.767668   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:44.064534   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:44.268026   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:44.564694   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:44.767190   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:45.063935   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:45.267444   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:45.563750   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:45.767718   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:46.064551   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:46.270159   11879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 10:22:46.563342   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:46.767766   11879 kapi.go:107] duration metric: took 1m7.506503052s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0415 10:22:47.063224   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:47.563600   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:48.064383   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:48.563613   11879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 10:22:49.063810   11879 kapi.go:107] duration metric: took 1m7.003601885s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0415 10:22:49.065720   11879 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-798865 cluster.
	I0415 10:22:49.067517   11879 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0415 10:22:49.069198   11879 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0415 10:22:49.070805   11879 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, ingress-dns, nvidia-device-plugin, helm-tiller, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0415 10:22:49.072315   11879 addons.go:505] duration metric: took 1m17.198197218s for enable addons: enabled=[cloud-spanner default-storageclass ingress-dns nvidia-device-plugin helm-tiller storage-provisioner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0415 10:22:49.072361   11879 start.go:245] waiting for cluster config update ...
	I0415 10:22:49.072394   11879 start.go:254] writing updated cluster config ...
	I0415 10:22:49.072705   11879 ssh_runner.go:195] Run: rm -f paused
	I0415 10:22:49.120687   11879 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 10:22:49.122799   11879 out.go:177] * Done! kubectl is now configured to use "addons-798865" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	d8d339ea0cf2f       a416a98b71e22       2 seconds ago        Exited              helper-pod                               0                   e32bf97748a92       helper-pod-delete-pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d
	62d322d7b5181       ba5dc23f65d4c       5 seconds ago        Exited              busybox                                  0                   2e277fddf22b6       test-local-path
	db1eafe96e1b0       e45ec2747dd93       10 seconds ago       Exited              gadget                                   3                   f5dfcb26fa839       gadget-47qwr
	1cf5a8f1d4567       a416a98b71e22       13 seconds ago       Exited              helper-pod                               0                   7a19a8e844f12       helper-pod-create-pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d
	aa10914a3c6e3       db2fc13d44d50       16 seconds ago       Running             gcp-auth                                 0                   a55a17636f1cf       gcp-auth-7d69788767-fwq5r
	bc4d2e14d22fa       ffcc66479b5ba       18 seconds ago       Running             controller                               0                   1930fba6ca1c6       ingress-nginx-controller-65496f9567-x6lbz
	0df7ab2fe4c96       738351fd438f0       26 seconds ago       Running             csi-snapshotter                          0                   c4ed757878557       csi-hostpathplugin-4slgb
	69b852098091d       931dbfd16f87c       27 seconds ago       Running             csi-provisioner                          0                   c4ed757878557       csi-hostpathplugin-4slgb
	94cec1f0048ee       e899260153aed       28 seconds ago       Running             liveness-probe                           0                   c4ed757878557       csi-hostpathplugin-4slgb
	ecff3442f9d6a       e255e073c508c       29 seconds ago       Running             hostpath                                 0                   c4ed757878557       csi-hostpathplugin-4slgb
	c1b872a415e48       88ef14a257f42       30 seconds ago       Running             node-driver-registrar                    0                   c4ed757878557       csi-hostpathplugin-4slgb
	72551be4b4ba1       3f39089e90831       30 seconds ago       Running             tiller                                   0                   883bcd32a7760       tiller-deploy-7b677967b9-86zw7
	2bba070343e09       aa61ee9c70bc4       32 seconds ago       Running             volume-snapshot-controller               0                   cc9f10cd420d7       snapshot-controller-58dbcc7b99-s2f56
	08732938db965       31de47c733c91       32 seconds ago       Running             yakd                                     0                   501bee7375509       yakd-dashboard-9947fc6bf-rlf8l
	92a61fb8a1c7c       e16d1e3a10667       38 seconds ago       Running             local-path-provisioner                   0                   da7e28b769b65       local-path-provisioner-78b46b4d5c-hwz6m
	e99c5302bfda3       aa61ee9c70bc4       40 seconds ago       Running             volume-snapshot-controller               0                   37f99cd2155ec       snapshot-controller-58dbcc7b99-bsfcv
	a300ad6c587d2       19a639eda60f0       41 seconds ago       Running             csi-resizer                              0                   f3e7845b01db9       csi-hostpath-resizer-0
	80175889af22a       a1ed5895ba635       43 seconds ago       Running             csi-external-health-monitor-controller   0                   c4ed757878557       csi-hostpathplugin-4slgb
	c8531302f4825       b29d748098e32       44 seconds ago       Exited              patch                                    0                   7507ec407d0a6       ingress-nginx-admission-patch-6z7s7
	c4855706c71b8       b29d748098e32       44 seconds ago       Exited              patch                                    0                   32bf9d6b3e1d0       gcp-auth-certs-patch-btgl5
	5c7a1c518aeaf       59cbb42146a37       44 seconds ago       Running             csi-attacher                             0                   c0b80639b78c2       csi-hostpath-attacher-0
	c085d5273d8ae       b29d748098e32       47 seconds ago       Exited              create                                   0                   8b83ab27ac7f4       ingress-nginx-admission-create-zkzfl
	bbcd8bde90473       b29d748098e32       47 seconds ago       Exited              create                                   0                   b449fe95e27db       gcp-auth-certs-create-ljtcx
	1a31752ad2ab5       f6df8d4b582f4       51 seconds ago       Running             nvidia-device-plugin-ctr                 0                   bcd8abc5e7e33       nvidia-device-plugin-daemonset-ldbwl
	a6da399c77c08       cbb01a7bd410d       57 seconds ago       Running             coredns                                  0                   a2d22b6378308       coredns-76f75df574-wc54r
	14c338e14be22       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   e9f340e5818ea       kube-ingress-dns-minikube
	f36612796bfc1       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   037c2afe2d8a9       storage-provisioner
	c7f3d5d25cf6f       4950bb10b3f87       About a minute ago   Running             kindnet-cni                              0                   d5a1037f79e4e       kindnet-hl4pc
	d78f2ce36a429       a1d263b5dc5b0       About a minute ago   Running             kube-proxy                               0                   1f14760dd6505       kube-proxy-c25mb
	6b85b9a831da9       8c390d98f50c0       About a minute ago   Running             kube-scheduler                           0                   ba8495b04c361       kube-scheduler-addons-798865
	a35d887e07189       39f995c9f1996       About a minute ago   Running             kube-apiserver                           0                   73e216e59b749       kube-apiserver-addons-798865
	9a17b8bce389b       3861cfcd7c04c       About a minute ago   Running             etcd                                     0                   cb0fe766dc23f       etcd-addons-798865
	42350cad4065f       6052a25da3f97       About a minute ago   Running             kube-controller-manager                  0                   e0838888bc93b       kube-controller-manager-addons-798865
	
	
	==> containerd <==
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.569654868Z" level=info msg="Container to stop \"063816ab8ddbec4ee3d803148bfb5d162f703098b7ae34e2c510b94f340e6959\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.593474226Z" level=info msg="TearDown network for sandbox \"4237f3397d0157c4b81bdc11cd2bc3e79da57dcf2f35c48ca85226fb8f76fd97\" successfully"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.593677575Z" level=info msg="StopPodSandbox for \"4237f3397d0157c4b81bdc11cd2bc3e79da57dcf2f35c48ca85226fb8f76fd97\" returns successfully"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.596098869Z" level=info msg="shim disconnected" id=b0869a3a0842569dd3ba1ed1dcfb33c28f6cc3fbdce48fc508f61131696a00f8
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.596149924Z" level=warning msg="cleaning up after shim disconnected" id=b0869a3a0842569dd3ba1ed1dcfb33c28f6cc3fbdce48fc508f61131696a00f8 namespace=k8s.io
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.596172720Z" level=info msg="cleaning up dead shim"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.604953444Z" level=warning msg="cleanup warnings time=\"2024-04-15T10:23:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9212 runtime=io.containerd.runc.v2\n"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.705349739Z" level=info msg="TearDown network for sandbox \"b0869a3a0842569dd3ba1ed1dcfb33c28f6cc3fbdce48fc508f61131696a00f8\" successfully"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.705399088Z" level=info msg="StopPodSandbox for \"b0869a3a0842569dd3ba1ed1dcfb33c28f6cc3fbdce48fc508f61131696a00f8\" returns successfully"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.724709742Z" level=info msg="RemoveContainer for \"063816ab8ddbec4ee3d803148bfb5d162f703098b7ae34e2c510b94f340e6959\""
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.730430670Z" level=info msg="RemoveContainer for \"063816ab8ddbec4ee3d803148bfb5d162f703098b7ae34e2c510b94f340e6959\" returns successfully"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.730783017Z" level=error msg="ContainerStatus for \"063816ab8ddbec4ee3d803148bfb5d162f703098b7ae34e2c510b94f340e6959\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"063816ab8ddbec4ee3d803148bfb5d162f703098b7ae34e2c510b94f340e6959\": not found"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.731417260Z" level=info msg="StopPodSandbox for \"e32bf97748a9217e4ffc0acb10e7dd61027f37702ab6e5c0fae17bcb42cbde62\""
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.731493785Z" level=info msg="Container to stop \"d8d339ea0cf2f9004088a661c943425a6489440345d702eb6530e07e20b69afd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.732559394Z" level=info msg="RemoveContainer for \"59485457770050f35e9393049f011bbadfe432df59d8bc175ae8cc44c0c3d224\""
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.738488561Z" level=info msg="RemoveContainer for \"59485457770050f35e9393049f011bbadfe432df59d8bc175ae8cc44c0c3d224\" returns successfully"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.740181849Z" level=info msg="RemoveContainer for \"4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c\""
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.746325518Z" level=info msg="RemoveContainer for \"4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c\" returns successfully"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.746954535Z" level=error msg="ContainerStatus for \"4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c\": not found"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.758909983Z" level=info msg="shim disconnected" id=e32bf97748a9217e4ffc0acb10e7dd61027f37702ab6e5c0fae17bcb42cbde62
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.758984572Z" level=warning msg="cleaning up after shim disconnected" id=e32bf97748a9217e4ffc0acb10e7dd61027f37702ab6e5c0fae17bcb42cbde62 namespace=k8s.io
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.759000873Z" level=info msg="cleaning up dead shim"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.767817637Z" level=warning msg="cleanup warnings time=\"2024-04-15T10:23:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9302 runtime=io.containerd.runc.v2\n"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.825438826Z" level=info msg="TearDown network for sandbox \"e32bf97748a9217e4ffc0acb10e7dd61027f37702ab6e5c0fae17bcb42cbde62\" successfully"
	Apr 15 10:23:03 addons-798865 containerd[806]: time="2024-04-15T10:23:03.825492256Z" level=info msg="StopPodSandbox for \"e32bf97748a9217e4ffc0acb10e7dd61027f37702ab6e5c0fae17bcb42cbde62\" returns successfully"
	
	
	==> coredns [a6da399c77c0886e739c5881f9288733b31621972320137cf45f00cc203ae1fa] <==
	[INFO] 10.244.0.2:44144 - 23811 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103844s
	[INFO] 10.244.0.2:52722 - 18679 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.003877717s
	[INFO] 10.244.0.2:52722 - 24816 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005895013s
	[INFO] 10.244.0.2:34878 - 46505 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004769301s
	[INFO] 10.244.0.2:34878 - 36266 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007376317s
	[INFO] 10.244.0.2:46406 - 8519 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004742474s
	[INFO] 10.244.0.2:46406 - 51266 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006668541s
	[INFO] 10.244.0.2:52821 - 44543 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006439s
	[INFO] 10.244.0.2:52821 - 16890 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117302s
	[INFO] 10.244.0.21:49768 - 30210 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000210748s
	[INFO] 10.244.0.21:41034 - 32901 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000258643s
	[INFO] 10.244.0.21:56780 - 29595 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000148678s
	[INFO] 10.244.0.21:46951 - 46298 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135245s
	[INFO] 10.244.0.21:35621 - 14330 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113501s
	[INFO] 10.244.0.21:40935 - 43104 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015918s
	[INFO] 10.244.0.21:33742 - 30320 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.00696332s
	[INFO] 10.244.0.21:39482 - 64275 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007039714s
	[INFO] 10.244.0.21:54054 - 6330 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006257499s
	[INFO] 10.244.0.21:59657 - 22877 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006604955s
	[INFO] 10.244.0.21:54580 - 25868 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005008117s
	[INFO] 10.244.0.21:50043 - 55446 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006150973s
	[INFO] 10.244.0.21:58711 - 3753 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000859714s
	[INFO] 10.244.0.21:49197 - 3645 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000727417s
	[INFO] 10.244.0.24:41988 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166904s
	[INFO] 10.244.0.24:39219 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158797s
	
	
	==> describe nodes <==
	Name:               addons-798865
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-798865
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c1d049bb371ec65637a9a4aa595cb21c815e116
	                    minikube.k8s.io/name=addons-798865
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T10_21_19_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-798865
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-798865"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 10:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-798865
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 10:23:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 10:22:50 +0000   Mon, 15 Apr 2024 10:21:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 10:22:50 +0000   Mon, 15 Apr 2024 10:21:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 10:22:50 +0000   Mon, 15 Apr 2024 10:21:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 10:22:50 +0000   Mon, 15 Apr 2024 10:21:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-798865
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859352Ki
	  pods:               110
	System Info:
	  Machine ID:                 76358476e34a4517888834a405c43785
	  System UUID:                c8452458-5c5b-420a-9492-4e0bcd1ce1e4
	  Boot ID:                    a8b9634b-19e8-4804-864e-0c9fcdebacd3
	  Kernel Version:             5.15.0-1055-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.31
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-47qwr                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  gcp-auth                    gcp-auth-7d69788767-fwq5r                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  ingress-nginx               ingress-nginx-controller-65496f9567-x6lbz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         86s
	  kube-system                 coredns-76f75df574-wc54r                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     93s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 csi-hostpathplugin-4slgb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 etcd-addons-798865                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         105s
	  kube-system                 kindnet-hl4pc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-addons-798865                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-addons-798865        200m (2%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-c25mb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-addons-798865                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 nvidia-device-plugin-daemonset-ldbwl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 snapshot-controller-58dbcc7b99-bsfcv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 snapshot-controller-58dbcc7b99-s2f56         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 tiller-deploy-7b677967b9-86zw7               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  local-path-storage          local-path-provisioner-78b46b4d5c-hwz6m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-rlf8l               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             438Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s (x9 over 112s)  kubelet          Node addons-798865 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x7 over 112s)  kubelet          Node addons-798865 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x7 over 112s)  kubelet          Node addons-798865 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s                 kubelet          Node addons-798865 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s                 kubelet          Node addons-798865 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s                 kubelet          Node addons-798865 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             106s                 kubelet          Node addons-798865 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                106s                 kubelet          Node addons-798865 status is now: NodeReady
	  Normal  RegisteredNode           94s                  node-controller  Node addons-798865 event: Registered Node addons-798865 in Controller
	
	
	==> dmesg <==
	[  +0.002536]  #4
	[  +0.001563] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.002246] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002047] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002122]  #5
	[  +0.000839]  #6
	[  +0.003523]  #7
	[  +0.059166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.502162] i8042: Warning: Keylock active
	[  +0.007832] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004015] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000895] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000786] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000677] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001083] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000739] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000812] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.668604] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005965] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.012921] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002665] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015384] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.714111] kauditd_printk_skb: 37 callbacks suppressed
	
	
	==> etcd [9a17b8bce389bee865cf358885b94a32efe295295c6ebe1a1a1dcd937c15661d] <==
	{"level":"info","ts":"2024-04-15T10:21:13.980881Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T10:21:13.980956Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T10:21:13.980987Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T10:21:13.982386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T10:21:13.982644Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-15T10:21:49.590398Z","caller":"traceutil/trace.go:171","msg":"trace[1389665214] linearizableReadLoop","detail":"{readStateIndex:932; appliedIndex:931; }","duration":"115.501989ms","start":"2024-04-15T10:21:49.47488Z","end":"2024-04-15T10:21:49.590382Z","steps":["trace[1389665214] 'read index received'  (duration: 47.27812ms)","trace[1389665214] 'applied index is now lower than readState.Index'  (duration: 68.223362ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T10:21:49.590463Z","caller":"traceutil/trace.go:171","msg":"trace[1891504984] transaction","detail":"{read_only:false; response_revision:912; number_of_response:1; }","duration":"117.012944ms","start":"2024-04-15T10:21:49.47343Z","end":"2024-04-15T10:21:49.590443Z","steps":["trace[1891504984] 'process raft request'  (duration: 48.712679ms)","trace[1891504984] 'compare'  (duration: 68.167871ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T10:21:49.590727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.7848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:91406"}
	{"level":"info","ts":"2024-04-15T10:21:49.590792Z","caller":"traceutil/trace.go:171","msg":"trace[828254177] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:912; }","duration":"115.926738ms","start":"2024-04-15T10:21:49.474854Z","end":"2024-04-15T10:21:49.590781Z","steps":["trace[828254177] 'agreement among raft nodes before linearized reading'  (duration: 115.616051ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T10:21:49.665519Z","caller":"traceutil/trace.go:171","msg":"trace[647239084] transaction","detail":"{read_only:false; response_revision:913; number_of_response:1; }","duration":"103.50777ms","start":"2024-04-15T10:21:49.56199Z","end":"2024-04-15T10:21:49.665498Z","steps":["trace[647239084] 'process raft request'  (duration: 103.363333ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T10:21:49.665577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.361115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11155"}
	{"level":"info","ts":"2024-04-15T10:21:49.66562Z","caller":"traceutil/trace.go:171","msg":"trace[258831007] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:913; }","duration":"103.444982ms","start":"2024-04-15T10:21:49.562163Z","end":"2024-04-15T10:21:49.665608Z","steps":["trace[258831007] 'agreement among raft nodes before linearized reading'  (duration: 103.284078ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T10:22:18.694371Z","caller":"traceutil/trace.go:171","msg":"trace[320473145] linearizableReadLoop","detail":"{readStateIndex:1084; appliedIndex:1082; }","duration":"207.576237ms","start":"2024-04-15T10:22:18.486775Z","end":"2024-04-15T10:22:18.694351Z","steps":["trace[320473145] 'read index received'  (duration: 62.617941ms)","trace[320473145] 'applied index is now lower than readState.Index'  (duration: 144.95759ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T10:22:18.694433Z","caller":"traceutil/trace.go:171","msg":"trace[486731272] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"215.499999ms","start":"2024-04-15T10:22:18.478914Z","end":"2024-04-15T10:22:18.694414Z","steps":["trace[486731272] 'process raft request'  (duration: 139.35303ms)","trace[486731272] 'compare'  (duration: 75.957868ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T10:22:18.694596Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.358752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2024-04-15T10:22:18.694597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.601252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11224"}
	{"level":"info","ts":"2024-04-15T10:22:18.694642Z","caller":"traceutil/trace.go:171","msg":"trace[1197126602] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1057; }","duration":"105.424783ms","start":"2024-04-15T10:22:18.589206Z","end":"2024-04-15T10:22:18.694631Z","steps":["trace[1197126602] 'agreement among raft nodes before linearized reading'  (duration: 105.339289ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T10:22:18.694646Z","caller":"traceutil/trace.go:171","msg":"trace[1191587234] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1057; }","duration":"132.674905ms","start":"2024-04-15T10:22:18.56196Z","end":"2024-04-15T10:22:18.694635Z","steps":["trace[1191587234] 'agreement among raft nodes before linearized reading'  (duration: 132.53058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T10:22:18.694743Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.722827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-15T10:22:18.694772Z","caller":"traceutil/trace.go:171","msg":"trace[819743362] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1057; }","duration":"208.017474ms","start":"2024-04-15T10:22:18.486744Z","end":"2024-04-15T10:22:18.694762Z","steps":["trace[819743362] 'agreement among raft nodes before linearized reading'  (duration: 207.709135ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T10:22:18.819352Z","caller":"traceutil/trace.go:171","msg":"trace[74372810] transaction","detail":"{read_only:false; response_revision:1058; number_of_response:1; }","duration":"121.269305ms","start":"2024-04-15T10:22:18.698068Z","end":"2024-04-15T10:22:18.819337Z","steps":["trace[74372810] 'process raft request'  (duration: 114.222228ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T10:22:18.827712Z","caller":"traceutil/trace.go:171","msg":"trace[376533745] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"124.804245ms","start":"2024-04-15T10:22:18.70288Z","end":"2024-04-15T10:22:18.827684Z","steps":["trace[376533745] 'process raft request'  (duration: 124.616164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T10:22:52.387227Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.433906ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128028525697405345 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" mod_revision:1295 > success:<request_delete_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" > > failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-04-15T10:22:52.387396Z","caller":"traceutil/trace.go:171","msg":"trace[138903431] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1298; }","duration":"205.801546ms","start":"2024-04-15T10:22:52.181575Z","end":"2024-04-15T10:22:52.387377Z","steps":["trace[138903431] 'process raft request'  (duration: 66.821908ms)","trace[138903431] 'compare'  (duration: 138.356844ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T10:22:52.754371Z","caller":"traceutil/trace.go:171","msg":"trace[1261551385] transaction","detail":"{read_only:false; response_revision:1299; number_of_response:1; }","duration":"102.939521ms","start":"2024-04-15T10:22:52.651415Z","end":"2024-04-15T10:22:52.754355Z","steps":["trace[1261551385] 'process raft request'  (duration: 102.843214ms)"],"step_count":1}
	
	
	==> gcp-auth [aa10914a3c6e380ce4a0376c0e8399ace20393b161510b0ab2bdb1034835846f] <==
	2024/04/15 10:22:48 GCP Auth Webhook started!
	2024/04/15 10:22:49 Ready to marshal response ...
	2024/04/15 10:22:49 Ready to write response ...
	2024/04/15 10:22:49 Ready to marshal response ...
	2024/04/15 10:22:49 Ready to write response ...
	2024/04/15 10:22:59 Ready to marshal response ...
	2024/04/15 10:22:59 Ready to write response ...
	2024/04/15 10:23:01 Ready to marshal response ...
	2024/04/15 10:23:01 Ready to write response ...
	
	
	==> kernel <==
	 10:23:05 up 5 min,  0 users,  load average: 1.62, 0.92, 0.38
	Linux addons-798865 5.15.0-1055-gcp #63~20.04.1-Ubuntu SMP Wed Mar 20 14:40:47 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [c7f3d5d25cf6fde4fb28ec9825bc46ce6d24fd026649aebf098243892db8703e] <==
	I0415 10:21:33.543174       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0415 10:21:33.543258       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0415 10:21:33.543380       1 main.go:116] setting mtu 1500 for CNI 
	I0415 10:21:33.543398       1 main.go:146] kindnetd IP family: "ipv4"
	I0415 10:21:33.543409       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0415 10:22:03.884789       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0415 10:22:03.891623       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0415 10:22:03.891650       1 main.go:227] handling current node
	I0415 10:22:13.905796       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0415 10:22:13.905819       1 main.go:227] handling current node
	I0415 10:22:23.917994       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0415 10:22:23.918023       1 main.go:227] handling current node
	I0415 10:22:33.939840       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0415 10:22:33.939864       1 main.go:227] handling current node
	I0415 10:22:43.944187       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0415 10:22:43.944216       1 main.go:227] handling current node
	I0415 10:22:53.956525       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0415 10:22:53.956556       1 main.go:227] handling current node
	I0415 10:23:03.967870       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0415 10:23:03.967904       1 main.go:227] handling current node
	
	
	==> kube-apiserver [a35d887e071894d5a55d7551c764d30af018495ec0200e97a0d4b7590bf2b220] <==
	I0415 10:21:37.457026       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0415 10:21:37.456912       1 handler_proxy.go:93] no RequestInfo found in the context
	E0415 10:21:37.457065       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0415 10:21:37.461740       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0415 10:21:37.953819       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0415 10:21:38.549963       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.105.117.113"}
	I0415 10:21:38.648967       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.101.236.165"}
	I0415 10:21:38.744377       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0415 10:21:38.858888       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 10:21:38.858925       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 10:21:39.041363       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 10:21:39.041438       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 10:21:39.254790       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 10:21:39.254841       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 10:21:40.765679       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.111.27.88"}
	I0415 10:21:40.773850       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0415 10:21:40.947214       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.202.111"}
	I0415 10:21:41.895620       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.236.0"}
	E0415 10:22:11.473438       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.73:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.217.73:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.217.73:443: connect: connection refused
	W0415 10:22:11.473521       1 handler_proxy.go:93] no RequestInfo found in the context
	E0415 10:22:11.473582       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0415 10:22:11.475608       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.73:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.217.73:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.217.73:443: connect: connection refused
	E0415 10:22:11.479080       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.217.73:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.217.73:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.217.73:443: connect: connection refused
	I0415 10:22:11.567596       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [42350cad4065ff498a157177ab38190396a5b791be55f5df3712c88d416faac6] <==
	I0415 10:22:35.580222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="63.447µs"
	I0415 10:22:40.542639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.177654ms"
	I0415 10:22:40.542729       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="57.57µs"
	I0415 10:22:46.612046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="80.436µs"
	I0415 10:22:47.993022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="8.734541ms"
	I0415 10:22:47.993123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="50.143µs"
	I0415 10:22:48.625650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="6.043273ms"
	I0415 10:22:48.625731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="46.894µs"
	I0415 10:22:49.291926       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0415 10:22:49.303083       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 10:22:49.429487       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 10:22:49.429525       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 10:22:51.014561       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0415 10:22:51.036818       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0415 10:22:52.067224       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0415 10:22:52.387805       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0415 10:22:54.789435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-75d6c48ddd" duration="14.226µs"
	I0415 10:22:55.548236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="5.158749ms"
	I0415 10:22:55.548350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="68.638µs"
	I0415 10:23:00.418059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5446596998" duration="15.285µs"
	I0415 10:23:00.657422       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 10:23:02.527079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="12.673µs"
	I0415 10:23:03.064509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="8.316962ms"
	I0415 10:23:03.064658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="94.19µs"
	I0415 10:23:03.431112       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="12.524µs"
	
	
	==> kube-proxy [d78f2ce36a4295c8b97e3e9cbeef1c22d6bec44c3cd4677fd5010c67fb0f6962] <==
	I0415 10:21:33.743905       1 server_others.go:72] "Using iptables proxy"
	I0415 10:21:33.842340       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0415 10:21:34.351658       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0415 10:21:34.351692       1 server_others.go:168] "Using iptables Proxier"
	I0415 10:21:34.357774       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0415 10:21:34.357800       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0415 10:21:34.357837       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 10:21:34.358380       1 server.go:865] "Version info" version="v1.29.3"
	I0415 10:21:34.358645       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 10:21:34.437466       1 config.go:97] "Starting endpoint slice config controller"
	I0415 10:21:34.439656       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 10:21:34.439603       1 config.go:188] "Starting service config controller"
	I0415 10:21:34.439715       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 10:21:34.441630       1 config.go:315] "Starting node config controller"
	I0415 10:21:34.441667       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 10:21:34.540013       1 shared_informer.go:318] Caches are synced for service config
	I0415 10:21:34.540082       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 10:21:34.542436       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6b85b9a831da935984b506c74932042d811dafa6634cc64c49fe958e4be7fbfa] <==
	E0415 10:21:15.661508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 10:21:15.661524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 10:21:15.661336       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 10:21:15.661554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 10:21:15.661446       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 10:21:15.661615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 10:21:15.662087       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 10:21:15.662114       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 10:21:15.662666       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 10:21:15.662686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 10:21:16.469003       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 10:21:16.469035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 10:21:16.528425       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 10:21:16.528470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 10:21:16.539792       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 10:21:16.539824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 10:21:16.549134       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 10:21:16.549166       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 10:21:16.601807       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 10:21:16.601848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 10:21:16.707464       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 10:21:16.707507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 10:21:16.744711       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 10:21:16.744746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0415 10:21:18.259922       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.738750    1653 scope.go:117] "RemoveContainer" containerID="4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c"
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.740539    1653 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea92d092-8004-4e2e-8ae8-25103bf4b26f-kube-api-access-6wg95" (OuterVolumeSpecName: "kube-api-access-6wg95") pod "ea92d092-8004-4e2e-8ae8-25103bf4b26f" (UID: "ea92d092-8004-4e2e-8ae8-25103bf4b26f"). InnerVolumeSpecName "kube-api-access-6wg95". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.746612    1653 scope.go:117] "RemoveContainer" containerID="4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c"
	Apr 15 10:23:03 addons-798865 kubelet[1653]: E0415 10:23:03.747200    1653 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c\": not found" containerID="4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c"
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.747265    1653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c"} err="failed to get container status \"4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b6e0fb4a196672febb38c95c6574f7953585464f6a414065b738e922ade2f3c\": not found"
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.839137    1653 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv674\" (UniqueName: \"kubernetes.io/projected/bb04e702-be5c-4700-ad5a-503fe20c5cb5-kube-api-access-lv674\") pod \"bb04e702-be5c-4700-ad5a-503fe20c5cb5\" (UID: \"bb04e702-be5c-4700-ad5a-503fe20c5cb5\") "
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.839268    1653 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qthkc\" (UniqueName: \"kubernetes.io/projected/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-kube-api-access-qthkc\") pod \"c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6\" (UID: \"c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6\") "
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.839355    1653 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6wg95\" (UniqueName: \"kubernetes.io/projected/ea92d092-8004-4e2e-8ae8-25103bf4b26f-kube-api-access-6wg95\") on node \"addons-798865\" DevicePath \"\""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.841376    1653 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb04e702-be5c-4700-ad5a-503fe20c5cb5-kube-api-access-lv674" (OuterVolumeSpecName: "kube-api-access-lv674") pod "bb04e702-be5c-4700-ad5a-503fe20c5cb5" (UID: "bb04e702-be5c-4700-ad5a-503fe20c5cb5"). InnerVolumeSpecName "kube-api-access-lv674". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.841452    1653 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-kube-api-access-qthkc" (OuterVolumeSpecName: "kube-api-access-qthkc") pod "c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6" (UID: "c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6"). InnerVolumeSpecName "kube-api-access-qthkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.939984    1653 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-script\") pod \"c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6\" (UID: \"c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6\") "
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940063    1653 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-gcp-creds\") pod \"c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6\" (UID: \"c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6\") "
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940125    1653 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6" (UID: "c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940159    1653 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-data\") pod \"c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6\" (UID: \"c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6\") "
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940180    1653 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-data" (OuterVolumeSpecName: "data") pod "c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6" (UID: "c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940257    1653 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-gcp-creds\") on node \"addons-798865\" DevicePath \"\""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940275    1653 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qthkc\" (UniqueName: \"kubernetes.io/projected/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-kube-api-access-qthkc\") on node \"addons-798865\" DevicePath \"\""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940285    1653 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lv674\" (UniqueName: \"kubernetes.io/projected/bb04e702-be5c-4700-ad5a-503fe20c5cb5-kube-api-access-lv674\") on node \"addons-798865\" DevicePath \"\""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940297    1653 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-data\") on node \"addons-798865\" DevicePath \"\""
	Apr 15 10:23:03 addons-798865 kubelet[1653]: I0415 10:23:03.940380    1653 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-script" (OuterVolumeSpecName: "script") pod "c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6" (UID: "c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Apr 15 10:23:04 addons-798865 kubelet[1653]: I0415 10:23:04.040758    1653 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c60fff2a-598f-4ad9-bcfe-8f5dd14f07f6-script\") on node \"addons-798865\" DevicePath \"\""
	Apr 15 10:23:04 addons-798865 kubelet[1653]: I0415 10:23:04.735033    1653 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32bf97748a9217e4ffc0acb10e7dd61027f37702ab6e5c0fae17bcb42cbde62"
	Apr 15 10:23:04 addons-798865 kubelet[1653]: I0415 10:23:04.860945    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9784744c-0f6a-4202-af2b-86d6344fd0da" path="/var/lib/kubelet/pods/9784744c-0f6a-4202-af2b-86d6344fd0da/volumes"
	Apr 15 10:23:04 addons-798865 kubelet[1653]: I0415 10:23:04.861285    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb04e702-be5c-4700-ad5a-503fe20c5cb5" path="/var/lib/kubelet/pods/bb04e702-be5c-4700-ad5a-503fe20c5cb5/volumes"
	Apr 15 10:23:04 addons-798865 kubelet[1653]: I0415 10:23:04.861576    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea92d092-8004-4e2e-8ae8-25103bf4b26f" path="/var/lib/kubelet/pods/ea92d092-8004-4e2e-8ae8-25103bf4b26f/volumes"
	
	
	==> storage-provisioner [f36612796bfc1e876de35a9ac0d7a41096fb35cb74981c0d2ed6896ef9ef5bff] <==
	I0415 10:21:38.369620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 10:21:38.445211       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 10:21:38.445255       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 10:21:38.453753       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 10:21:38.454688       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ec8a5f17-ba25-4a45-a8f2-2f1dbe072c7b", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-798865_12cf1a21-49ad-42f5-92f8-f10ae7bb8457 became leader
	I0415 10:21:38.454849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-798865_12cf1a21-49ad-42f5-92f8-f10ae7bb8457!
	I0415 10:21:38.558498       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-798865_12cf1a21-49ad-42f5-92f8-f10ae7bb8457!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-798865 -n addons-798865
helpers_test.go:261: (dbg) Run:  kubectl --context addons-798865 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zkzfl ingress-nginx-admission-patch-6z7s7 helm-test helper-pod-delete-pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-798865 describe pod ingress-nginx-admission-create-zkzfl ingress-nginx-admission-patch-6z7s7 helm-test helper-pod-delete-pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-798865 describe pod ingress-nginx-admission-create-zkzfl ingress-nginx-admission-patch-6z7s7 helm-test helper-pod-delete-pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d: exit status 1 (59.264318ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zkzfl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6z7s7" not found
	Error from server (NotFound): pods "helm-test" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-798865 describe pod ingress-nginx-admission-create-zkzfl ingress-nginx-admission-patch-6z7s7 helm-test helper-pod-delete-pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (2.32s)


Test pass (308/335)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.41
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 7.45
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.07
18 TestDownloadOnly/v1.29.3/DeleteAll 0.2
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-rc.2/json-events 8.81
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.19
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.13
29 TestDownloadOnlyKic 1.09
30 TestBinaryMirror 0.71
31 TestOffline 60.84
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 127.86
38 TestAddons/parallel/Registry 14.36
39 TestAddons/parallel/Ingress 19.69
40 TestAddons/parallel/InspektorGadget 11.99
41 TestAddons/parallel/MetricsServer 5.81
42 TestAddons/parallel/HelmTiller 9.93
44 TestAddons/parallel/CSI 69.36
46 TestAddons/parallel/CloudSpanner 5.51
47 TestAddons/parallel/LocalPath 55.98
48 TestAddons/parallel/NvidiaDevicePlugin 6.5
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 12.17
54 TestCertOptions 23.85
55 TestCertExpiration 212.86
57 TestForceSystemdFlag 29.32
58 TestForceSystemdEnv 33.38
59 TestDockerEnvContainerd 40.27
60 TestKVMDriverInstallOrUpdate 3.64
64 TestErrorSpam/setup 23.02
65 TestErrorSpam/start 0.59
66 TestErrorSpam/status 0.88
67 TestErrorSpam/pause 1.49
68 TestErrorSpam/unpause 1.44
69 TestErrorSpam/stop 1.37
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 48.14
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.07
81 TestFunctional/serial/CacheCmd/cache/add_local 1.99
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 41.97
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.34
92 TestFunctional/serial/LogsFileCmd 1.36
93 TestFunctional/serial/InvalidService 4.06
95 TestFunctional/parallel/ConfigCmd 0.42
96 TestFunctional/parallel/DashboardCmd 6.55
97 TestFunctional/parallel/DryRun 0.38
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.92
103 TestFunctional/parallel/ServiceCmdConnect 10.7
104 TestFunctional/parallel/AddonsCmd 0.15
105 TestFunctional/parallel/PersistentVolumeClaim 35.13
107 TestFunctional/parallel/SSHCmd 0.53
108 TestFunctional/parallel/CpCmd 1.64
109 TestFunctional/parallel/MySQL 24.2
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 1.65
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
119 TestFunctional/parallel/License 0.23
120 TestFunctional/parallel/Version/short 0.07
121 TestFunctional/parallel/Version/components 0.69
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.25
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.47
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
134 TestFunctional/parallel/ImageCommands/ImageBuild 2.95
135 TestFunctional/parallel/ImageCommands/Setup 1.39
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.19
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.29
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.8
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.89
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/MountCmd/any-port 7.76
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.27
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.82
150 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
151 TestFunctional/parallel/MountCmd/specific-port 1.98
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
154 TestFunctional/parallel/ProfileCmd/profile_list 0.37
155 TestFunctional/parallel/ServiceCmd/List 0.91
156 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
157 TestFunctional/parallel/ServiceCmd/JSONOutput 0.91
158 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
159 TestFunctional/parallel/ServiceCmd/Format 0.61
160 TestFunctional/parallel/ServiceCmd/URL 0.71
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/StartCluster 109.99
168 TestMultiControlPlane/serial/DeployApp 16.79
169 TestMultiControlPlane/serial/PingHostFromPods 1.1
170 TestMultiControlPlane/serial/AddWorkerNode 18.93
171 TestMultiControlPlane/serial/NodeLabels 0.07
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.66
173 TestMultiControlPlane/serial/CopyFile 16.51
174 TestMultiControlPlane/serial/StopSecondaryNode 12.52
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
176 TestMultiControlPlane/serial/RestartSecondaryNode 15.42
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.65
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.12
179 TestMultiControlPlane/serial/DeleteSecondaryNode 9.89
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.47
181 TestMultiControlPlane/serial/StopCluster 35.56
182 TestMultiControlPlane/serial/RestartCluster 67.93
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.47
184 TestMultiControlPlane/serial/AddSecondaryNode 36.93
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.65
189 TestJSONOutput/start/Command 48.48
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.65
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.57
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.7
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 30.65
215 TestKicCustomNetwork/use_default_bridge_network 22.96
216 TestKicExistingNetwork 25.28
217 TestKicCustomSubnet 26.17
218 TestKicStaticIP 27.19
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 50.17
223 TestMountStart/serial/StartWithMountFirst 7.86
224 TestMountStart/serial/VerifyMountFirst 0.25
225 TestMountStart/serial/StartWithMountSecond 7.71
226 TestMountStart/serial/VerifyMountSecond 0.25
227 TestMountStart/serial/DeleteFirst 1.58
228 TestMountStart/serial/VerifyMountPostDelete 0.25
229 TestMountStart/serial/Stop 1.18
230 TestMountStart/serial/RestartStopped 6.84
231 TestMountStart/serial/VerifyMountPostStop 0.25
234 TestMultiNode/serial/FreshStart2Nodes 64.79
235 TestMultiNode/serial/DeployApp2Nodes 32.04
236 TestMultiNode/serial/PingHostFrom2Pods 0.78
237 TestMultiNode/serial/AddNode 18.14
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.3
240 TestMultiNode/serial/CopyFile 9.4
241 TestMultiNode/serial/StopNode 2.14
242 TestMultiNode/serial/StartAfterStop 8.64
243 TestMultiNode/serial/RestartKeepsNodes 78.31
244 TestMultiNode/serial/DeleteNode 5.08
245 TestMultiNode/serial/StopMultiNode 23.75
246 TestMultiNode/serial/RestartMultiNode 52.43
247 TestMultiNode/serial/ValidateNameConflict 22.16
252 TestPreload 106.45
254 TestScheduledStopUnix 100.27
257 TestInsufficientStorage 9.72
258 TestRunningBinaryUpgrade 68.44
260 TestKubernetesUpgrade 323.63
261 TestMissingContainerUpgrade 159.22
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
267 TestNoKubernetes/serial/StartWithK8s 34.23
272 TestNetworkPlugins/group/false 7.87
276 TestStoppedBinaryUpgrade/Setup 0.96
277 TestStoppedBinaryUpgrade/Upgrade 173.71
278 TestNoKubernetes/serial/StartWithStopK8s 11.75
279 TestNoKubernetes/serial/Start 5.68
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
281 TestNoKubernetes/serial/ProfileList 0.91
282 TestNoKubernetes/serial/Stop 1.2
283 TestNoKubernetes/serial/StartNoArgs 6.22
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
294 TestPause/serial/Start 50.01
295 TestNetworkPlugins/group/auto/Start 48.45
296 TestNetworkPlugins/group/auto/KubeletFlags 0.27
297 TestNetworkPlugins/group/auto/NetCatPod 9.22
298 TestPause/serial/SecondStartNoReconfiguration 5.34
299 TestPause/serial/Pause 0.64
300 TestPause/serial/VerifyStatus 0.29
301 TestPause/serial/Unpause 0.6
302 TestPause/serial/PauseAgain 0.75
303 TestPause/serial/DeletePaused 2.46
304 TestNetworkPlugins/group/auto/DNS 0.12
305 TestNetworkPlugins/group/auto/Localhost 0.11
306 TestNetworkPlugins/group/auto/HairPin 0.11
307 TestPause/serial/VerifyDeletedResources 0.71
308 TestNetworkPlugins/group/kindnet/Start 49.88
309 TestNetworkPlugins/group/calico/Start 67.88
310 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
312 TestNetworkPlugins/group/kindnet/NetCatPod 8.25
313 TestNetworkPlugins/group/custom-flannel/Start 54.1
314 TestNetworkPlugins/group/kindnet/DNS 0.13
315 TestNetworkPlugins/group/kindnet/Localhost 0.12
316 TestNetworkPlugins/group/kindnet/HairPin 0.11
317 TestNetworkPlugins/group/enable-default-cni/Start 81.27
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.32
320 TestNetworkPlugins/group/calico/NetCatPod 9.2
321 TestNetworkPlugins/group/calico/DNS 0.15
322 TestNetworkPlugins/group/calico/Localhost 0.12
323 TestNetworkPlugins/group/calico/HairPin 0.12
324 TestNetworkPlugins/group/flannel/Start 55.59
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
327 TestNetworkPlugins/group/bridge/Start 54.04
328 TestNetworkPlugins/group/custom-flannel/DNS 0.18
329 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
330 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
332 TestStartStop/group/old-k8s-version/serial/FirstStart 140.49
333 TestNetworkPlugins/group/flannel/ControllerPod 6.01
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
335 TestNetworkPlugins/group/flannel/NetCatPod 8.21
336 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
337 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.18
338 TestNetworkPlugins/group/flannel/DNS 0.13
339 TestNetworkPlugins/group/flannel/Localhost 0.18
340 TestNetworkPlugins/group/flannel/HairPin 0.1
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
345 TestNetworkPlugins/group/bridge/NetCatPod 9.19
346 TestNetworkPlugins/group/bridge/DNS 0.17
347 TestNetworkPlugins/group/bridge/Localhost 0.11
348 TestNetworkPlugins/group/bridge/HairPin 0.18
350 TestStartStop/group/no-preload/serial/FirstStart 64.35
352 TestStartStop/group/embed-certs/serial/FirstStart 55.36
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.6
355 TestStartStop/group/embed-certs/serial/DeployApp 8.23
356 TestStartStop/group/no-preload/serial/DeployApp 8.24
357 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
358 TestStartStop/group/embed-certs/serial/Stop 11.92
359 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
360 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
361 TestStartStop/group/no-preload/serial/Stop 11.94
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
364 TestStartStop/group/embed-certs/serial/SecondStart 262.51
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
367 TestStartStop/group/no-preload/serial/SecondStart 263.44
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.11
370 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
371 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
372 TestStartStop/group/old-k8s-version/serial/Stop 12.48
373 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
374 TestStartStop/group/old-k8s-version/serial/SecondStart 120.28
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/old-k8s-version/serial/Pause 2.6
380 TestStartStop/group/newest-cni/serial/FirstStart 37.33
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
383 TestStartStop/group/newest-cni/serial/Stop 1.23
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
385 TestStartStop/group/newest-cni/serial/SecondStart 13.43
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/newest-cni/serial/Pause 2.68
390 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
393 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
394 TestStartStop/group/embed-certs/serial/Pause 2.7
395 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
397 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
398 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
399 TestStartStop/group/no-preload/serial/Pause 2.63
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.66

TestDownloadOnly/v1.20.0/json-events (8.41s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-548433 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-548433 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.405131079s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.41s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-548433
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-548433: exit status 85 (72.075787ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-548433 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |          |
	|         | -p download-only-548433        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 10:20:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 10:20:13.197396   10331 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:20:13.197538   10331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:20:13.197550   10331 out.go:304] Setting ErrFile to fd 2...
	I0415 10:20:13.197557   10331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:20:13.197762   10331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	W0415 10:20:13.197899   10331 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18641-3502/.minikube/config/config.json: open /home/jenkins/minikube-integration/18641-3502/.minikube/config/config.json: no such file or directory
	I0415 10:20:13.198442   10331 out.go:298] Setting JSON to true
	I0415 10:20:13.199352   10331 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":164,"bootTime":1713176249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 10:20:13.199414   10331 start.go:139] virtualization: kvm guest
	I0415 10:20:13.201977   10331 out.go:97] [download-only-548433] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 10:20:13.203495   10331 out.go:169] MINIKUBE_LOCATION=18641
	W0415 10:20:13.202120   10331 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 10:20:13.202118   10331 notify.go:220] Checking for updates...
	I0415 10:20:13.206564   10331 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:20:13.208168   10331 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	I0415 10:20:13.209651   10331 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	I0415 10:20:13.211185   10331 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 10:20:13.214007   10331 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 10:20:13.214233   10331 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:20:13.234402   10331 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 10:20:13.234495   10331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:20:13.607320   10331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-15 10:20:13.59864754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:20:13.607425   10331 docker.go:295] overlay module found
	I0415 10:20:13.609479   10331 out.go:97] Using the docker driver based on user configuration
	I0415 10:20:13.609505   10331 start.go:297] selected driver: docker
	I0415 10:20:13.609510   10331 start.go:901] validating driver "docker" against <nil>
	I0415 10:20:13.609583   10331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:20:13.653334   10331 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-15 10:20:13.64516569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:20:13.653536   10331 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 10:20:13.654060   10331 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0415 10:20:13.654256   10331 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 10:20:13.656419   10331 out.go:169] Using Docker driver with root privileges
	I0415 10:20:13.658048   10331 cni.go:84] Creating CNI manager for ""
	I0415 10:20:13.658062   10331 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0415 10:20:13.658069   10331 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 10:20:13.658140   10331 start.go:340] cluster config:
	{Name:download-only-548433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-548433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:20:13.659544   10331 out.go:97] Starting "download-only-548433" primary control-plane node in "download-only-548433" cluster
	I0415 10:20:13.659563   10331 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0415 10:20:13.661104   10331 out.go:97] Pulling base image v0.0.43-1712854342-18621 ...
	I0415 10:20:13.661128   10331 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0415 10:20:13.661227   10331 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 10:20:13.675716   10331 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 10:20:13.675890   10331 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory
	I0415 10:20:13.675982   10331 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 10:20:13.688485   10331 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0415 10:20:13.688518   10331 cache.go:56] Caching tarball of preloaded images
	I0415 10:20:13.688654   10331 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0415 10:20:13.690766   10331 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 10:20:13.690799   10331 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0415 10:20:13.716932   10331 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-548433 host does not exist
	  To start a cluster, run: "minikube start -p download-only-548433"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-548433
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (7.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-119984 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-119984 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.448901055s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (7.45s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-119984
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-119984: exit status 85 (69.4457ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-548433 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | -p download-only-548433        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-548433        | download-only-548433 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-119984 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | -p download-only-119984        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 10:20:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 10:20:22.002956   10642 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:20:22.003109   10642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:20:22.003120   10642 out.go:304] Setting ErrFile to fd 2...
	I0415 10:20:22.003127   10642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:20:22.003327   10642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:20:22.003906   10642 out.go:298] Setting JSON to true
	I0415 10:20:22.004748   10642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":173,"bootTime":1713176249,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 10:20:22.004814   10642 start.go:139] virtualization: kvm guest
	I0415 10:20:22.007212   10642 out.go:97] [download-only-119984] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 10:20:22.008760   10642 out.go:169] MINIKUBE_LOCATION=18641
	I0415 10:20:22.007420   10642 notify.go:220] Checking for updates...
	I0415 10:20:22.012111   10642 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:20:22.013786   10642 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	I0415 10:20:22.015423   10642 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	I0415 10:20:22.017103   10642 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 10:20:22.019806   10642 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 10:20:22.020174   10642 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:20:22.041987   10642 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 10:20:22.042090   10642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:20:22.085477   10642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2024-04-15 10:20:22.077260438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:20:22.085582   10642 docker.go:295] overlay module found
	I0415 10:20:22.087368   10642 out.go:97] Using the docker driver based on user configuration
	I0415 10:20:22.087394   10642 start.go:297] selected driver: docker
	I0415 10:20:22.087399   10642 start.go:901] validating driver "docker" against <nil>
	I0415 10:20:22.087493   10642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:20:22.129303   10642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2024-04-15 10:20:22.120706421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:20:22.129482   10642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 10:20:22.130571   10642 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0415 10:20:22.130768   10642 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 10:20:22.132865   10642 out.go:169] Using Docker driver with root privileges
	I0415 10:20:22.134301   10642 cni.go:84] Creating CNI manager for ""
	I0415 10:20:22.134318   10642 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0415 10:20:22.134325   10642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 10:20:22.134390   10642 start.go:340] cluster config:
	{Name:download-only-119984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-119984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:20:22.135782   10642 out.go:97] Starting "download-only-119984" primary control-plane node in "download-only-119984" cluster
	I0415 10:20:22.135800   10642 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0415 10:20:22.136935   10642 out.go:97] Pulling base image v0.0.43-1712854342-18621 ...
	I0415 10:20:22.136964   10642 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 10:20:22.137022   10642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 10:20:22.153044   10642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 10:20:22.153166   10642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory
	I0415 10:20:22.153184   10642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory, skipping pull
	I0415 10:20:22.153190   10642 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in cache, skipping pull
	I0415 10:20:22.153204   10642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f as a tarball
	I0415 10:20:22.164009   10642 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0415 10:20:22.164034   10642 cache.go:56] Caching tarball of preloaded images
	I0415 10:20:22.164136   10642 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 10:20:22.166071   10642 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 10:20:22.166094   10642 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 ...
	I0415 10:20:22.189066   10642 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:dcad3363f354722395d68e96a1f5de54 -> /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-119984 host does not exist
	  To start a cluster, run: "minikube start -p download-only-119984"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.07s)
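The preload tarball URLs fetched in the `download.go:107` lines above follow a predictable naming scheme. A sketch that rebuilds such a URL from the schema version, Kubernetes version, and runtime — the bucket name and layout are copied from the log, but the helper itself is hypothetical, not a minikube API:

```python
# Base bucket for minikube preload tarballs, as seen in the log output.
BUCKET = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs"

def preload_url(schema: str, k8s: str, runtime: str) -> str:
    """Reconstruct the preload tarball URL for an amd64/overlay2 preload."""
    name = f"preloaded-images-k8s-{schema}-{k8s}-{runtime}-overlay2-amd64.tar.lz4"
    return f"{BUCKET}/{schema}/{k8s}/{name}"

url = preload_url("v18", "v1.29.3", "containerd")
# Matches the v1.29.3 preload URL logged by preload.go/download.go above.
```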

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-119984
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/json-events (8.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-695766 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-695766 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.812252389s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (8.81s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-695766
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-695766: exit status 85 (75.957876ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-548433 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | -p download-only-548433           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-548433           | download-only-548433 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| start   | -o=json --download-only           | download-only-119984 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | -p download-only-119984           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| delete  | -p download-only-119984           | download-only-119984 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC | 15 Apr 24 10:20 UTC |
	| start   | -o=json --download-only           | download-only-695766 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:20 UTC |                     |
	|         | -p download-only-695766           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 10:20:29
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 10:20:29.849986   10956 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:20:29.850505   10956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:20:29.850565   10956 out.go:304] Setting ErrFile to fd 2...
	I0415 10:20:29.850587   10956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:20:29.851083   10956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:20:29.852216   10956 out.go:298] Setting JSON to true
	I0415 10:20:29.853085   10956 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":181,"bootTime":1713176249,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 10:20:29.853153   10956 start.go:139] virtualization: kvm guest
	I0415 10:20:29.855365   10956 out.go:97] [download-only-695766] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 10:20:29.856769   10956 out.go:169] MINIKUBE_LOCATION=18641
	I0415 10:20:29.855494   10956 notify.go:220] Checking for updates...
	I0415 10:20:29.859242   10956 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:20:29.860485   10956 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	I0415 10:20:29.861759   10956 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	I0415 10:20:29.863067   10956 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 10:20:29.865440   10956 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 10:20:29.865658   10956 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:20:29.886056   10956 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 10:20:29.886173   10956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:20:29.928982   10956 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-15 10:20:29.919786154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:20:29.929086   10956 docker.go:295] overlay module found
	I0415 10:20:29.930956   10956 out.go:97] Using the docker driver based on user configuration
	I0415 10:20:29.930981   10956 start.go:297] selected driver: docker
	I0415 10:20:29.930988   10956 start.go:901] validating driver "docker" against <nil>
	I0415 10:20:29.931082   10956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:20:29.973447   10956 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-15 10:20:29.964935771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:20:29.973614   10956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 10:20:29.974075   10956 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0415 10:20:29.974210   10956 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 10:20:29.976155   10956 out.go:169] Using Docker driver with root privileges
	I0415 10:20:29.977669   10956 cni.go:84] Creating CNI manager for ""
	I0415 10:20:29.977691   10956 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0415 10:20:29.977699   10956 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 10:20:29.977768   10956 start.go:340] cluster config:
	{Name:download-only-695766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-695766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0
s}
	I0415 10:20:29.979270   10956 out.go:97] Starting "download-only-695766" primary control-plane node in "download-only-695766" cluster
	I0415 10:20:29.979288   10956 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0415 10:20:29.980587   10956 out.go:97] Pulling base image v0.0.43-1712854342-18621 ...
	I0415 10:20:29.980613   10956 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0415 10:20:29.980710   10956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 10:20:29.994720   10956 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 10:20:29.994845   10956 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory
	I0415 10:20:29.994865   10956 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory, skipping pull
	I0415 10:20:29.994869   10956 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in cache, skipping pull
	I0415 10:20:29.994877   10956 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f as a tarball
	I0415 10:20:30.001735   10956 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0415 10:20:30.001764   10956 cache.go:56] Caching tarball of preloaded images
	I0415 10:20:30.001896   10956 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0415 10:20:30.003896   10956 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 10:20:30.003924   10956 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0415 10:20:30.029891   10956 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:dfcc3b0407e077e710ff902e47acd662 -> /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0415 10:20:33.563826   10956 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0415 10:20:33.563947   10956 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18641-3502/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0415 10:20:34.323764   10956 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on containerd
	I0415 10:20:34.324112   10956 profile.go:143] Saving config to /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/download-only-695766/config.json ...
	I0415 10:20:34.324143   10956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/download-only-695766/config.json: {Name:mk72bbcf1d4845e64e1516bb7ef1ff1372923fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:20:34.324303   10956 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0415 10:20:34.324426   10956 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18641-3502/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl
	
	
	* The control-plane node download-only-695766 host does not exist
	  To start a cluster, run: "minikube start -p download-only-695766"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.19s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-695766
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.09s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-519343 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-519343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-519343
--- PASS: TestDownloadOnlyKic (1.09s)

TestBinaryMirror (0.71s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-477246 --alsologtostderr --binary-mirror http://127.0.0.1:36477 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-477246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-477246
--- PASS: TestBinaryMirror (0.71s)

TestOffline (60.84s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-053040 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-053040 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (58.615219506s)
helpers_test.go:175: Cleaning up "offline-containerd-053040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-053040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-053040: (2.22423423s)
--- PASS: TestOffline (60.84s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-798865
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-798865: exit status 85 (61.179738ms)

-- stdout --
	* Profile "addons-798865" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-798865"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-798865
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-798865: exit status 85 (63.619092ms)

-- stdout --
	* Profile "addons-798865" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-798865"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (127.86s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-798865 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-798865 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m7.863152098s)
--- PASS: TestAddons/Setup (127.86s)

TestAddons/parallel/Registry (14.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.604102ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7nrz6" [ea92d092-8004-4e2e-8ae8-25103bf4b26f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006005957s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9fjsk" [bb04e702-be5c-4700-ad5a-503fe20c5cb5] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005198048s
addons_test.go:340: (dbg) Run:  kubectl --context addons-798865 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-798865 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-798865 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.604494117s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 ip
2024/04/15 10:23:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.36s)

TestAddons/parallel/Ingress (19.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-798865 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-798865 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-798865 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [40abcb7d-37bb-492d-a597-1649276c2d5d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [40abcb7d-37bb-492d-a597-1649276c2d5d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003974872s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-798865 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-798865 addons disable ingress-dns --alsologtostderr -v=1: (1.69667059s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-798865 addons disable ingress --alsologtostderr -v=1: (7.67416708s)
--- PASS: TestAddons/parallel/Ingress (19.69s)

TestAddons/parallel/InspektorGadget (11.99s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-47qwr" [b4187425-e79c-4682-af56-dc5005486e07] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003721059s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-798865
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-798865: (5.9812834s)
--- PASS: TestAddons/parallel/InspektorGadget (11.99s)

TestAddons/parallel/MetricsServer (5.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 17.01302ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-cb5th" [30cef0c5-25a7-4dec-ad25-da36bcf2a50f] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006260799s
addons_test.go:415: (dbg) Run:  kubectl --context addons-798865 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)

TestAddons/parallel/HelmTiller (9.93s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.7873ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-86zw7" [c9230124-5d27-4cd5-bdcc-12793f06841e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004638198s
addons_test.go:473: (dbg) Run:  kubectl --context addons-798865 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-798865 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.381492134s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.93s)

TestAddons/parallel/CSI (69.36s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 17.58147ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-798865 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-798865 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8b3632c6-a1ee-4cf4-be66-b78d4cb69c3a] Pending
helpers_test.go:344: "task-pv-pod" [8b3632c6-a1ee-4cf4-be66-b78d4cb69c3a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8b3632c6-a1ee-4cf4-be66-b78d4cb69c3a] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00297494s
addons_test.go:584: (dbg) Run:  kubectl --context addons-798865 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-798865 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-798865 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-798865 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-798865 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-798865 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-798865 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5dd03f12-2905-4e5d-863a-671bff55278e] Pending
helpers_test.go:344: "task-pv-pod-restore" [5dd03f12-2905-4e5d-863a-671bff55278e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5dd03f12-2905-4e5d-863a-671bff55278e] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003107483s
addons_test.go:626: (dbg) Run:  kubectl --context addons-798865 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-798865 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-798865 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-798865 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.60928886s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.36s)
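The `pvc-restore.yaml` step above creates a PVC that hydrates from the earlier `VolumeSnapshot`. The testdata manifest itself is not shown in the log, but a minimal sketch of that restore pattern (storage class and size are hypothetical; the `hpvc-restore` and `new-snapshot-demo` names come from the log) looks like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc   # hypothetical; must match the CSI driver's class
  dataSource:
    name: new-snapshot-demo           # the VolumeSnapshot taken earlier in the test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Binding such a PVC provisions a new volume pre-populated with the snapshot's contents, which is what `task-pv-pod-restore` then mounts.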
TestAddons/parallel/CloudSpanner (5.51s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-lg8bx" [4cdaf385-dbc8-4e49-bb75-9e599c6bf0cd] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00356685s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-798865
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)
TestAddons/parallel/LocalPath (55.98s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-798865 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-798865 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [445d9607-35b8-4c77-b89c-a8dbbb961fa4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [445d9607-35b8-4c77-b89c-a8dbbb961fa4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [445d9607-35b8-4c77-b89c-a8dbbb961fa4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002954522s
addons_test.go:891: (dbg) Run:  kubectl --context addons-798865 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 ssh "cat /opt/local-path-provisioner/pvc-d8d9427c-d9e8-49f7-9adf-4c90fbe6459d_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-798865 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-798865 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-798865 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-798865 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.106264519s)
--- PASS: TestAddons/parallel/LocalPath (55.98s)
TestAddons/parallel/NvidiaDevicePlugin (6.5s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ldbwl" [089bfdb5-0cbf-430f-9fa8-e7cc07a01fc0] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004185713s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-798865
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)
TestAddons/parallel/Yakd (5s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-rlf8l" [bf80e270-220e-481c-963f-924f706644eb] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003778993s
--- PASS: TestAddons/parallel/Yakd (5.00s)
TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-798865 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-798865 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
TestAddons/StoppedEnableDisable (12.17s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-798865
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-798865: (11.891416357s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-798865
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-798865
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-798865
--- PASS: TestAddons/StoppedEnableDisable (12.17s)
TestCertOptions (23.85s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-584961 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0415 10:52:49.138166   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-584961 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.272462473s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-584961 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-584961 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-584961 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-584961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-584961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-584961: (1.995723209s)
--- PASS: TestCertOptions (23.85s)
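TestCertOptions passes `--apiserver-ips` and `--apiserver-names` and then inspects `apiserver.crt` with `openssl x509`, checking that those values appear as subject alternative names. What that check looks for can be reproduced locally with a throwaway self-signed certificate carrying the same SANs as the test invocation (a sketch; it does not touch a cluster and assumes OpenSSL 1.1.1+ for `-addext`):

```shell
#!/bin/sh
# Generate a short-lived self-signed cert with the SANs from the test's flags,
# then extract the SAN line the same way the test inspects apiserver.crt.
set -eu
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com" \
  2>/dev/null
# grep the SAN header, take the following line (the actual SAN list)
sans=$(openssl x509 -text -noout -in "$tmp/cert.pem" \
  | grep -A1 "Subject Alternative Name" | tail -n1)
echo "$sans"
rm -rf "$tmp"
```

The test's assertion is essentially that `192.168.15.15` and `www.google.com` show up in that SAN list of the generated apiserver certificate.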
TestCertExpiration (212.86s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-578953 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-578953 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (25.240725632s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-578953 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-578953 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.283828302s)
helpers_test.go:175: Cleaning up "cert-expiration-578953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-578953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-578953: (2.33761446s)
--- PASS: TestCertExpiration (212.86s)
TestForceSystemdFlag (29.32s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-335576 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0415 10:52:36.443075   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-335576 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.745131695s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-335576 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-335576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-335576
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-335576: (2.273831975s)
--- PASS: TestForceSystemdFlag (29.32s)
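The `cat /etc/containerd/config.toml` step above is where the test verifies that `--force-systemd` took effect. The setting it looks for is containerd's systemd cgroup driver option under the CRI runc runtime; the relevant fragment (sketched from containerd's documented config layout, not copied from the log) is:

```toml
# /etc/containerd/config.toml — fragment enabling the systemd cgroup driver
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```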
TestForceSystemdEnv (33.38s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-133261 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-133261 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.262318156s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-133261 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-133261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-133261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-133261: (3.822715144s)
--- PASS: TestForceSystemdEnv (33.38s)
TestDockerEnvContainerd (40.27s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-458431 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-458431 --driver=docker  --container-runtime=containerd: (24.012076909s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-458431"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PBd14gNGIHOk/agent.33202" SSH_AGENT_PID="33203" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PBd14gNGIHOk/agent.33202" SSH_AGENT_PID="33203" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PBd14gNGIHOk/agent.33202" SSH_AGENT_PID="33203" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.022026063s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PBd14gNGIHOk/agent.33202" SSH_AGENT_PID="33203" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-458431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-458431
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-458431: (2.135596582s)
--- PASS: TestDockerEnvContainerd (40.27s)
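The sequence above points a host-side `docker` CLI at the cluster node over SSH via `minikube docker-env --ssh-host --ssh-add`. A condensed sketch of that flow, using the profile name from the log and guarded so the cluster steps are skipped when `minikube` (or that profile) is not available on the machine:

```shell
#!/bin/sh
# Sketch of the docker-env flow exercised above; profile name taken from the log.
PROFILE=dockerenv-458431
if command -v minikube >/dev/null 2>&1 && minikube -p "$PROFILE" status >/dev/null 2>&1; then
  # Exports DOCKER_HOST=ssh://... and loads the node's key into ssh-agent.
  eval "$(minikube docker-env --ssh-host --ssh-add -p "$PROFILE")"
  docker version              # now talks to the daemon inside the minikube node
  docker image ls
  status=ran
else
  status=skipped              # no such cluster on this machine; nothing to demo
fi
echo "docker-env demo: $status"
```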
TestKVMDriverInstallOrUpdate (3.64s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.64s)
TestErrorSpam/setup (23.02s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-517442 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-517442 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-517442 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-517442 --driver=docker  --container-runtime=containerd: (23.02279259s)
--- PASS: TestErrorSpam/setup (23.02s)
TestErrorSpam/start (0.59s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)
TestErrorSpam/status (0.88s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 status
--- PASS: TestErrorSpam/status (0.88s)
TestErrorSpam/pause (1.49s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 pause
--- PASS: TestErrorSpam/pause (1.49s)
TestErrorSpam/unpause (1.44s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 unpause
--- PASS: TestErrorSpam/unpause (1.44s)
TestErrorSpam/stop (1.37s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 stop: (1.179863127s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517442 --log_dir /tmp/nospam-517442 stop
--- PASS: TestErrorSpam/stop (1.37s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18641-3502/.minikube/files/etc/test/nested/copy/10319/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (48.14s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-518142 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-518142 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.134867581s)
--- PASS: TestFunctional/serial/StartWithProxy (48.14s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (5s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-518142 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-518142 --alsologtostderr -v=8: (4.996624303s)
functional_test.go:659: soft start took 4.997310118s for "functional-518142" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.00s)
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-518142 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 cache add registry.k8s.io/pause:3.1: (1.002545242s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 cache add registry.k8s.io/pause:3.3: (1.067743985s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 cache add registry.k8s.io/pause:latest: (1.001298857s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)
TestFunctional/serial/CacheCmd/cache/add_local (1.99s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-518142 /tmp/TestFunctionalserialCacheCmdcacheadd_local2850912586/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cache add minikube-local-cache-test:functional-518142
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 cache add minikube-local-cache-test:functional-518142: (1.573508411s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cache delete minikube-local-cache-test:functional-518142
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-518142
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.99s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.63822ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
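The `cache_reload` round trip above (remove the image from the node with `crictl rmi`, confirm `crictl inspecti` now fails, then repopulate it from minikube's host-side cache) can be reproduced by hand. A sketch mirroring the log's commands, guarded so it no-ops when no minikube cluster is available:

```shell
#!/bin/sh
# Image reference taken from the log; cluster steps skipped without minikube.
IMG=registry.k8s.io/pause:latest
if command -v minikube >/dev/null 2>&1 && minikube status >/dev/null 2>&1; then
  minikube cache add "$IMG"                     # pull into the host-side cache
  minikube ssh sudo crictl rmi "$IMG"           # drop the image from the node
  minikube cache reload                         # push cached images back to the node
  minikube ssh sudo crictl inspecti "$IMG" \
    && echo "image restored on node"
fi
echo "cache demo image: $IMG"
```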
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 kubectl -- --context functional-518142 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-518142 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.97s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-518142 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-518142 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.96562351s)
functional_test.go:757: restart took 41.965751717s for "functional-518142" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.97s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-518142 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.34s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 logs: (1.339539369s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

TestFunctional/serial/LogsFileCmd (1.36s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 logs --file /tmp/TestFunctionalserialLogsFileCmd1323859268/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 logs --file /tmp/TestFunctionalserialLogsFileCmd1323859268/001/logs.txt: (1.360347055s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.06s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-518142 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-518142
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-518142: exit status 115 (332.728924ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30730 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-518142 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)

TestFunctional/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 config get cpus: exit status 14 (55.809074ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 config get cpus: exit status 14 (67.283203ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)

TestFunctional/parallel/DashboardCmd (6.55s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-518142 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-518142 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 56437: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.55s)

TestFunctional/parallel/DryRun (0.38s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-518142 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-518142 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (165.330084ms)

-- stdout --
	* [functional-518142] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0415 10:28:10.879229   55036 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:28:10.879382   55036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:28:10.879393   55036 out.go:304] Setting ErrFile to fd 2...
	I0415 10:28:10.879399   55036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:28:10.879763   55036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:28:10.880514   55036 out.go:298] Setting JSON to false
	I0415 10:28:10.881967   55036 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":642,"bootTime":1713176249,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 10:28:10.882059   55036 start.go:139] virtualization: kvm guest
	I0415 10:28:10.884531   55036 out.go:177] * [functional-518142] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 10:28:10.886177   55036 out.go:177]   - MINIKUBE_LOCATION=18641
	I0415 10:28:10.886224   55036 notify.go:220] Checking for updates...
	I0415 10:28:10.887610   55036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:28:10.889262   55036 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	I0415 10:28:10.890723   55036 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	I0415 10:28:10.892209   55036 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 10:28:10.893612   55036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 10:28:10.895290   55036 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:28:10.895779   55036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:28:10.918878   55036 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 10:28:10.919037   55036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:28:10.970315   55036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-04-15 10:28:10.960993446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:28:10.970439   55036 docker.go:295] overlay module found
	I0415 10:28:10.972518   55036 out.go:177] * Using the docker driver based on existing profile
	I0415 10:28:10.974144   55036 start.go:297] selected driver: docker
	I0415 10:28:10.974159   55036 start.go:901] validating driver "docker" against &{Name:functional-518142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-518142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:28:10.974269   55036 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 10:28:10.977251   55036 out.go:177] 
	W0415 10:28:10.978715   55036 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 10:28:10.980207   55036 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-518142 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.38s)

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-518142 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-518142 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (161.720204ms)

-- stdout --
	* [functional-518142] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0415 10:28:09.798093   54566 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:28:09.798584   54566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:28:09.798604   54566 out.go:304] Setting ErrFile to fd 2...
	I0415 10:28:09.798613   54566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:28:09.799326   54566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:28:09.800607   54566 out.go:298] Setting JSON to false
	I0415 10:28:09.801643   54566 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":641,"bootTime":1713176249,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 10:28:09.801708   54566 start.go:139] virtualization: kvm guest
	I0415 10:28:09.803788   54566 out.go:177] * [functional-518142] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0415 10:28:09.805422   54566 out.go:177]   - MINIKUBE_LOCATION=18641
	I0415 10:28:09.806652   54566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:28:09.805491   54566 notify.go:220] Checking for updates...
	I0415 10:28:09.809164   54566 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	I0415 10:28:09.810689   54566 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	I0415 10:28:09.812235   54566 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 10:28:09.813881   54566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 10:28:09.815598   54566 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:28:09.816122   54566 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:28:09.836876   54566 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 10:28:09.837010   54566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:28:09.882609   54566 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-04-15 10:28:09.873803428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:28:09.882717   54566 docker.go:295] overlay module found
	I0415 10:28:09.884769   54566 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0415 10:28:09.886214   54566 start.go:297] selected driver: docker
	I0415 10:28:09.886226   54566 start.go:901] validating driver "docker" against &{Name:functional-518142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-518142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:28:09.886306   54566 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 10:28:09.888557   54566 out.go:177] 
	W0415 10:28:09.890268   54566 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 10:28:09.891626   54566 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.92s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)

TestFunctional/parallel/ServiceCmdConnect (10.7s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-518142 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-518142 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-8q5w6" [06e217a0-2413-46ac-ba27-8513b3b8df70] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-8q5w6" [06e217a0-2413-46ac-ba27-8513b3b8df70] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004239698s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30782
functional_test.go:1671: http://192.168.49.2:30782: success! body:

Hostname: hello-node-connect-55497b8b78-8q5w6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30782
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (35.13s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [171a63a9-080e-4136-a11a-7cf49051ac8a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004889969s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-518142 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-518142 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-518142 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-518142 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c819d3f8-91e7-4d32-9cc1-d39cb5649af1] Pending
helpers_test.go:344: "sp-pod" [c819d3f8-91e7-4d32-9cc1-d39cb5649af1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c819d3f8-91e7-4d32-9cc1-d39cb5649af1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.005204704s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-518142 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-518142 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-518142 delete -f testdata/storage-provisioner/pod.yaml: (1.241635694s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-518142 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [51b0dd15-360c-4c1e-8ef1-ad6cdbb19279] Pending
helpers_test.go:344: "sp-pod" [51b0dd15-360c-4c1e-8ef1-ad6cdbb19279] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [51b0dd15-360c-4c1e-8ef1-ad6cdbb19279] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004762146s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-518142 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.13s)

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh -n functional-518142 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cp functional-518142:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2668035232/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh -n functional-518142 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh -n functional-518142 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.64s)

TestFunctional/parallel/MySQL (24.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-518142 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-5tvwn" [3e21422a-c8cd-457e-b7f0-df5ca3faaafc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5tvwn" [3e21422a-c8cd-457e-b7f0-df5ca3faaafc] Running
E0415 10:27:49.138411   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:27:49.144449   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:27:49.154771   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:27:49.175152   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:27:49.215545   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:27:49.296556   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:27:49.456979   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:27:49.778069   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004769981s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-518142 exec mysql-859648c796-5tvwn -- mysql -ppassword -e "show databases;"
E0415 10:27:54.260269   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-518142 exec mysql-859648c796-5tvwn -- mysql -ppassword -e "show databases;": exit status 1 (280.787958ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-518142 exec mysql-859648c796-5tvwn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-518142 exec mysql-859648c796-5tvwn -- mysql -ppassword -e "show databases;": exit status 1 (204.499726ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-518142 exec mysql-859648c796-5tvwn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-518142 exec mysql-859648c796-5tvwn -- mysql -ppassword -e "show databases;": exit status 1 (109.569534ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-518142 exec mysql-859648c796-5tvwn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.20s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10319/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo cat /etc/test/nested/copy/10319/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.65s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10319.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo cat /etc/ssl/certs/10319.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10319.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo cat /usr/share/ca-certificates/10319.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/103192.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo cat /etc/ssl/certs/103192.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/103192.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo cat /usr/share/ca-certificates/103192.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-518142 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 ssh "sudo systemctl is-active docker": exit status 1 (290.740422ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 ssh "sudo systemctl is-active crio": exit status 1 (328.054711ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-518142 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-518142 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-518142 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 49711: os: process already finished
helpers_test.go:502: unable to terminate pid 49460: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-518142 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-518142 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-518142 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ebae484b-c2f2-4450-a5ba-cef245dd7d56] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ebae484b-c2f2-4450-a5ba-cef245dd7d56] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.004933779s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-518142 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-518142
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-518142
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-518142 image ls --format short --alsologtostderr:
I0415 10:28:12.117538   56030 out.go:291] Setting OutFile to fd 1 ...
I0415 10:28:12.123391   56030 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:12.123553   56030 out.go:304] Setting ErrFile to fd 2...
I0415 10:28:12.123591   56030 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:12.123987   56030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
I0415 10:28:12.124896   56030 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:12.125040   56030 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:12.125458   56030 cli_runner.go:164] Run: docker container inspect functional-518142 --format={{.State.Status}}
I0415 10:28:12.142910   56030 ssh_runner.go:195] Run: systemctl --version
I0415 10:28:12.142954   56030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-518142
I0415 10:28:12.161422   56030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/functional-518142/id_rsa Username:docker}
I0415 10:28:12.262626   56030 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-518142 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:39f995 | 35.1MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:6052a2 | 33.5MB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| docker.io/library/minikube-local-cache-test | functional-518142  | sha256:3a9a0e | 990B   |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                     | alpine             | sha256:e289a4 | 18MB   |
| docker.io/library/nginx                     | latest             | sha256:c613f1 | 70.5MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| gcr.io/google-containers/addon-resizer      | functional-518142  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:a1d263 | 28.4MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:8c390d | 18.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-518142 image ls --format table --alsologtostderr:
I0415 10:28:13.030667   56436 out.go:291] Setting OutFile to fd 1 ...
I0415 10:28:13.030926   56436 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:13.030936   56436 out.go:304] Setting ErrFile to fd 2...
I0415 10:28:13.030941   56436 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:13.031135   56436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
I0415 10:28:13.031681   56436 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:13.031777   56436 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:13.032211   56436 cli_runner.go:164] Run: docker container inspect functional-518142 --format={{.State.Status}}
I0415 10:28:13.054884   56436 ssh_runner.go:195] Run: systemctl --version
I0415 10:28:13.054948   56436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-518142
I0415 10:28:13.076087   56436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/functional-518142/id_rsa Username:docker}
I0415 10:28:13.177266   56436 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-518142 image ls --format json --alsologtostderr:
[{"id":"sha256:3a9a0e034144370ee1a6b755a8e4ed22924c05e955a8420d7fe6e47fc82b487f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-518142"],"size":"990"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"33466661"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":["docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1"],"repoTags":["docker.io/library/nginx:latest"],"size":"70542235"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-518142"],"size":"10823156"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17979767"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"35100536"},{"id":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"28398741"},{"id":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"18553260"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-518142 image ls --format json --alsologtostderr:
I0415 10:28:12.636818   56329 out.go:291] Setting OutFile to fd 1 ...
I0415 10:28:12.637249   56329 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:12.637291   56329 out.go:304] Setting ErrFile to fd 2...
I0415 10:28:12.637308   56329 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:12.637661   56329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
I0415 10:28:12.638421   56329 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:12.638574   56329 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:12.639128   56329 cli_runner.go:164] Run: docker container inspect functional-518142 --format={{.State.Status}}
I0415 10:28:12.658703   56329 ssh_runner.go:195] Run: systemctl --version
I0415 10:28:12.658758   56329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-518142
I0415 10:28:12.683843   56329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/functional-518142/id_rsa Username:docker}
I0415 10:28:12.841771   56329 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-518142 image ls --format yaml --alsologtostderr:
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-518142
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
repoTags:
- docker.io/library/nginx:alpine
size: "17979767"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "33466661"
- id: sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "28398741"
- id: sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "18553260"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:3a9a0e034144370ee1a6b755a8e4ed22924c05e955a8420d7fe6e47fc82b487f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-518142
size: "990"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "35100536"
- id: sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests:
- docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1
repoTags:
- docker.io/library/nginx:latest
size: "70542235"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-518142 image ls --format yaml --alsologtostderr:
I0415 10:28:12.378667   56180 out.go:291] Setting OutFile to fd 1 ...
I0415 10:28:12.379130   56180 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:12.382191   56180 out.go:304] Setting ErrFile to fd 2...
I0415 10:28:12.382256   56180 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:12.382683   56180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
I0415 10:28:12.383613   56180 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:12.383719   56180 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:12.384257   56180 cli_runner.go:164] Run: docker container inspect functional-518142 --format={{.State.Status}}
I0415 10:28:12.403167   56180 ssh_runner.go:195] Run: systemctl --version
I0415 10:28:12.403242   56180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-518142
I0415 10:28:12.423089   56180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/functional-518142/id_rsa Username:docker}
I0415 10:28:12.521151   56180 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 ssh pgrep buildkitd: exit status 1 (294.261871ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image build -t localhost/my-image:functional-518142 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 image build -t localhost/my-image:functional-518142 testdata/build --alsologtostderr: (2.421470601s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-518142 image build -t localhost/my-image:functional-518142 testdata/build --alsologtostderr:
I0415 10:28:12.441989   56227 out.go:291] Setting OutFile to fd 1 ...
I0415 10:28:12.442179   56227 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:12.442192   56227 out.go:304] Setting ErrFile to fd 2...
I0415 10:28:12.442198   56227 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:28:12.442486   56227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
I0415 10:28:12.443378   56227 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:12.444003   56227 config.go:182] Loaded profile config "functional-518142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 10:28:12.444454   56227 cli_runner.go:164] Run: docker container inspect functional-518142 --format={{.State.Status}}
I0415 10:28:12.463421   56227 ssh_runner.go:195] Run: systemctl --version
I0415 10:28:12.463477   56227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-518142
I0415 10:28:12.481326   56227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/functional-518142/id_rsa Username:docker}
I0415 10:28:12.586324   56227 build_images.go:161] Building image from path: /tmp/build.94186790.tar
I0415 10:28:12.586381   56227 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 10:28:12.642669   56227 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.94186790.tar
I0415 10:28:12.650868   56227 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.94186790.tar: stat -c "%s %y" /var/lib/minikube/build/build.94186790.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.94186790.tar': No such file or directory
I0415 10:28:12.650909   56227 ssh_runner.go:362] scp /tmp/build.94186790.tar --> /var/lib/minikube/build/build.94186790.tar (3072 bytes)
I0415 10:28:12.742047   56227 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.94186790
I0415 10:28:12.753322   56227 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.94186790 -xf /var/lib/minikube/build/build.94186790.tar
I0415 10:28:12.764473   56227 containerd.go:394] Building image: /var/lib/minikube/build/build.94186790
I0415 10:28:12.764567   56227 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.94186790 --local dockerfile=/var/lib/minikube/build/build.94186790 --output type=image,name=localhost/my-image:functional-518142
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.1s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:447a07d2b225cbd04e3d9a1c7786adfa5c34b82007735289ea88eec7b26df44c done
#8 exporting config sha256:810957ac53bbbd57d1a81b712780d21bd351aacf9982f32eaa66388edc7cad20 done
#8 naming to localhost/my-image:functional-518142 done
#8 DONE 0.1s
I0415 10:28:14.763377   56227 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.94186790 --local dockerfile=/var/lib/minikube/build/build.94186790 --output type=image,name=localhost/my-image:functional-518142: (1.998740303s)
I0415 10:28:14.763447   56227 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.94186790
I0415 10:28:14.773493   56227 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.94186790.tar
I0415 10:28:14.781579   56227 build_images.go:217] Built localhost/my-image:functional-518142 from /tmp/build.94186790.tar
I0415 10:28:14.781611   56227 build_images.go:133] succeeded building to: functional-518142
I0415 10:28:14.781616   56227 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls
2024/04/15 10:28:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)
TestFunctional/parallel/ImageCommands/Setup (1.39s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.367048081s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-518142
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.39s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image load --daemon gcr.io/google-containers/addon-resizer:functional-518142 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 image load --daemon gcr.io/google-containers/addon-resizer:functional-518142 --alsologtostderr: (4.960237865s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.19s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image load --daemon gcr.io/google-containers/addon-resizer:functional-518142 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 image load --daemon gcr.io/google-containers/addon-resizer:functional-518142 --alsologtostderr: (3.982948715s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.29s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0415 10:27:50.419172   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.21249683s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-518142
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image load --daemon gcr.io/google-containers/addon-resizer:functional-518142 --alsologtostderr
E0415 10:27:51.699639   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 image load --daemon gcr.io/google-containers/addon-resizer:functional-518142 --alsologtostderr: (5.348222239s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.80s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image save gcr.io/google-containers/addon-resizer:functional-518142 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-518142 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.30.191 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-518142 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/MountCmd/any-port (7.76s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdany-port360851655/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713176877457812342" to /tmp/TestFunctionalparallelMountCmdany-port360851655/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713176877457812342" to /tmp/TestFunctionalparallelMountCmdany-port360851655/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713176877457812342" to /tmp/TestFunctionalparallelMountCmdany-port360851655/001/test-1713176877457812342
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (298.36825ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 15 10:27 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 15 10:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 15 10:27 test-1713176877457812342
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh cat /mount-9p/test-1713176877457812342
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-518142 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d415d2c8-e966-4550-a094-1ab715574dcd] Pending
E0415 10:27:59.381157   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [d415d2c8-e966-4550-a094-1ab715574dcd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d415d2c8-e966-4550-a094-1ab715574dcd] Running
helpers_test.go:344: "busybox-mount" [d415d2c8-e966-4550-a094-1ab715574dcd] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d415d2c8-e966-4550-a094-1ab715574dcd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003502666s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-518142 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdany-port360851655/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.76s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image rm gcr.io/google-containers/addon-resizer:functional-518142 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-518142 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.038513005s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-518142
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 image save --daemon gcr.io/google-containers/addon-resizer:functional-518142 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-518142
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.82s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-518142 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-518142 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-hp4ct" [c1fee055-f85a-42d3-ab36-c847aca87bec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-hp4ct" [c1fee055-f85a-42d3-ab36-c847aca87bec] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.007315135s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/MountCmd/specific-port (1.98s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdspecific-port2713492954/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.857628ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdspecific-port2713492954/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 ssh "sudo umount -f /mount-9p": exit status 1 (256.175676ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-518142 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdspecific-port2713492954/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3407509734/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3407509734/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3407509734/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T" /mount1: exit status 1 (309.745057ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-518142 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3407509734/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3407509734/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-518142 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3407509734/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "302.658445ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "62.885885ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ServiceCmd/List (0.91s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.91s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
E0415 10:28:09.621993   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
functional_test.go:1362: Took "322.106288ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "58.945443ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 service list -o json
functional_test.go:1490: Took "910.04578ms" to run "out/minikube-linux-amd64 -p functional-518142 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30660
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.61s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.61s)

TestFunctional/parallel/ServiceCmd/URL (0.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-518142 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30660
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.71s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-518142
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-518142
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-518142
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (109.99s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-005052 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0415 10:28:30.102552   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:29:11.071670   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-005052 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m49.295938785s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (109.99s)

TestMultiControlPlane/serial/DeployApp (16.79s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-005052 -- rollout status deployment/busybox: (14.704878407s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-dvwwm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-qwlpw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-stzvx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-dvwwm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-qwlpw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-stzvx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-dvwwm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-qwlpw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-stzvx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (16.79s)

TestMultiControlPlane/serial/PingHostFromPods (1.10s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-dvwwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-dvwwm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-qwlpw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-qwlpw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-stzvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-005052 -- exec busybox-7fdf7869d9-stzvx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.10s)

TestMultiControlPlane/serial/AddWorkerNode (18.93s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-005052 -v=7 --alsologtostderr
E0415 10:30:32.992322   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-005052 -v=7 --alsologtostderr: (18.088636309s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (18.93s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-005052 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.66s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.66s)

TestMultiControlPlane/serial/CopyFile (16.51s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp testdata/cp-test.txt ha-005052:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1707275885/001/cp-test_ha-005052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052:/home/docker/cp-test.txt ha-005052-m02:/home/docker/cp-test_ha-005052_ha-005052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m02 "sudo cat /home/docker/cp-test_ha-005052_ha-005052-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052:/home/docker/cp-test.txt ha-005052-m03:/home/docker/cp-test_ha-005052_ha-005052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m03 "sudo cat /home/docker/cp-test_ha-005052_ha-005052-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052:/home/docker/cp-test.txt ha-005052-m04:/home/docker/cp-test_ha-005052_ha-005052-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m04 "sudo cat /home/docker/cp-test_ha-005052_ha-005052-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp testdata/cp-test.txt ha-005052-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1707275885/001/cp-test_ha-005052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m02:/home/docker/cp-test.txt ha-005052:/home/docker/cp-test_ha-005052-m02_ha-005052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052 "sudo cat /home/docker/cp-test_ha-005052-m02_ha-005052.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m02:/home/docker/cp-test.txt ha-005052-m03:/home/docker/cp-test_ha-005052-m02_ha-005052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m03 "sudo cat /home/docker/cp-test_ha-005052-m02_ha-005052-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m02:/home/docker/cp-test.txt ha-005052-m04:/home/docker/cp-test_ha-005052-m02_ha-005052-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m04 "sudo cat /home/docker/cp-test_ha-005052-m02_ha-005052-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp testdata/cp-test.txt ha-005052-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1707275885/001/cp-test_ha-005052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m03:/home/docker/cp-test.txt ha-005052:/home/docker/cp-test_ha-005052-m03_ha-005052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052 "sudo cat /home/docker/cp-test_ha-005052-m03_ha-005052.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m03:/home/docker/cp-test.txt ha-005052-m02:/home/docker/cp-test_ha-005052-m03_ha-005052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m02 "sudo cat /home/docker/cp-test_ha-005052-m03_ha-005052-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m03:/home/docker/cp-test.txt ha-005052-m04:/home/docker/cp-test_ha-005052-m03_ha-005052-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m04 "sudo cat /home/docker/cp-test_ha-005052-m03_ha-005052-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp testdata/cp-test.txt ha-005052-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1707275885/001/cp-test_ha-005052-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m04:/home/docker/cp-test.txt ha-005052:/home/docker/cp-test_ha-005052-m04_ha-005052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052 "sudo cat /home/docker/cp-test_ha-005052-m04_ha-005052.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m04:/home/docker/cp-test.txt ha-005052-m02:/home/docker/cp-test_ha-005052-m04_ha-005052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m02 "sudo cat /home/docker/cp-test_ha-005052-m04_ha-005052-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 cp ha-005052-m04:/home/docker/cp-test.txt ha-005052-m03:/home/docker/cp-test_ha-005052-m04_ha-005052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 ssh -n ha-005052-m03 "sudo cat /home/docker/cp-test_ha-005052-m04_ha-005052-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.51s)

TestMultiControlPlane/serial/StopSecondaryNode (12.52s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-005052 node stop m02 -v=7 --alsologtostderr: (11.846792624s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr: exit status 7 (667.794598ms)

-- stdout --
	ha-005052
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-005052-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-005052-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-005052-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0415 10:31:16.834631   76685 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:31:16.834923   76685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:31:16.834934   76685 out.go:304] Setting ErrFile to fd 2...
	I0415 10:31:16.834939   76685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:31:16.835142   76685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:31:16.835325   76685 out.go:298] Setting JSON to false
	I0415 10:31:16.835353   76685 mustload.go:65] Loading cluster: ha-005052
	I0415 10:31:16.835534   76685 notify.go:220] Checking for updates...
	I0415 10:31:16.835707   76685 config.go:182] Loaded profile config "ha-005052": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:31:16.835721   76685 status.go:255] checking status of ha-005052 ...
	I0415 10:31:16.836110   76685 cli_runner.go:164] Run: docker container inspect ha-005052 --format={{.State.Status}}
	I0415 10:31:16.854004   76685 status.go:330] ha-005052 host status = "Running" (err=<nil>)
	I0415 10:31:16.854027   76685 host.go:66] Checking if "ha-005052" exists ...
	I0415 10:31:16.854259   76685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-005052
	I0415 10:31:16.870906   76685 host.go:66] Checking if "ha-005052" exists ...
	I0415 10:31:16.871192   76685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:31:16.871237   76685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-005052
	I0415 10:31:16.887979   76685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/ha-005052/id_rsa Username:docker}
	I0415 10:31:16.981668   76685 ssh_runner.go:195] Run: systemctl --version
	I0415 10:31:16.985749   76685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:31:16.996548   76685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:31:17.047287   76685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:73 SystemTime:2024-04-15 10:31:17.036871837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:31:17.048718   76685 kubeconfig.go:125] found "ha-005052" server: "https://192.168.49.254:8443"
	I0415 10:31:17.048754   76685 api_server.go:166] Checking apiserver status ...
	I0415 10:31:17.048799   76685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 10:31:17.059543   76685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1564/cgroup
	I0415 10:31:17.068607   76685 api_server.go:182] apiserver freezer: "12:freezer:/docker/4a63235113ce68ac7c5e9550340042b54bb4f63444a8a8613c6b6b7e9ae27e0e/kubepods/burstable/pod25f3367bb5eda0982d629e59b6bd7da7/2393ab47b2b788676692768eb92d22447cf8c2309a8c9110aa7c7974c08b52b0"
	I0415 10:31:17.068692   76685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4a63235113ce68ac7c5e9550340042b54bb4f63444a8a8613c6b6b7e9ae27e0e/kubepods/burstable/pod25f3367bb5eda0982d629e59b6bd7da7/2393ab47b2b788676692768eb92d22447cf8c2309a8c9110aa7c7974c08b52b0/freezer.state
	I0415 10:31:17.076540   76685 api_server.go:204] freezer state: "THAWED"
	I0415 10:31:17.076720   76685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0415 10:31:17.080459   76685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0415 10:31:17.080493   76685 status.go:422] ha-005052 apiserver status = Running (err=<nil>)
	I0415 10:31:17.080508   76685 status.go:257] ha-005052 status: &{Name:ha-005052 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:31:17.080550   76685 status.go:255] checking status of ha-005052-m02 ...
	I0415 10:31:17.080800   76685 cli_runner.go:164] Run: docker container inspect ha-005052-m02 --format={{.State.Status}}
	I0415 10:31:17.099247   76685 status.go:330] ha-005052-m02 host status = "Stopped" (err=<nil>)
	I0415 10:31:17.099268   76685 status.go:343] host is not running, skipping remaining checks
	I0415 10:31:17.099274   76685 status.go:257] ha-005052-m02 status: &{Name:ha-005052-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:31:17.099291   76685 status.go:255] checking status of ha-005052-m03 ...
	I0415 10:31:17.099517   76685 cli_runner.go:164] Run: docker container inspect ha-005052-m03 --format={{.State.Status}}
	I0415 10:31:17.115932   76685 status.go:330] ha-005052-m03 host status = "Running" (err=<nil>)
	I0415 10:31:17.115961   76685 host.go:66] Checking if "ha-005052-m03" exists ...
	I0415 10:31:17.116262   76685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-005052-m03
	I0415 10:31:17.133748   76685 host.go:66] Checking if "ha-005052-m03" exists ...
	I0415 10:31:17.134061   76685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:31:17.134167   76685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-005052-m03
	I0415 10:31:17.151253   76685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/ha-005052-m03/id_rsa Username:docker}
	I0415 10:31:17.245969   76685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:31:17.257464   76685 kubeconfig.go:125] found "ha-005052" server: "https://192.168.49.254:8443"
	I0415 10:31:17.257492   76685 api_server.go:166] Checking apiserver status ...
	I0415 10:31:17.257520   76685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 10:31:17.268517   76685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	I0415 10:31:17.277931   76685 api_server.go:182] apiserver freezer: "12:freezer:/docker/11b4b7d58917c924a1e497b596c7342e764e586227ba7c5957de3da9c70ee24e/kubepods/burstable/pod076046d84372f0232ca8bb14855bc558/9818c5420e5fbd0c74003d9ea7c483e5617ccc5271fb8c0152867607b5f42588"
	I0415 10:31:17.277989   76685 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/11b4b7d58917c924a1e497b596c7342e764e586227ba7c5957de3da9c70ee24e/kubepods/burstable/pod076046d84372f0232ca8bb14855bc558/9818c5420e5fbd0c74003d9ea7c483e5617ccc5271fb8c0152867607b5f42588/freezer.state
	I0415 10:31:17.285976   76685 api_server.go:204] freezer state: "THAWED"
	I0415 10:31:17.286009   76685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0415 10:31:17.290028   76685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0415 10:31:17.290051   76685 status.go:422] ha-005052-m03 apiserver status = Running (err=<nil>)
	I0415 10:31:17.290060   76685 status.go:257] ha-005052-m03 status: &{Name:ha-005052-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:31:17.290074   76685 status.go:255] checking status of ha-005052-m04 ...
	I0415 10:31:17.290303   76685 cli_runner.go:164] Run: docker container inspect ha-005052-m04 --format={{.State.Status}}
	I0415 10:31:17.306066   76685 status.go:330] ha-005052-m04 host status = "Running" (err=<nil>)
	I0415 10:31:17.306095   76685 host.go:66] Checking if "ha-005052-m04" exists ...
	I0415 10:31:17.306332   76685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-005052-m04
	I0415 10:31:17.322998   76685 host.go:66] Checking if "ha-005052-m04" exists ...
	I0415 10:31:17.323233   76685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:31:17.323269   76685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-005052-m04
	I0415 10:31:17.340872   76685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/ha-005052-m04/id_rsa Username:docker}
	I0415 10:31:17.433509   76685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:31:17.444425   76685 status.go:257] ha-005052-m04 status: &{Name:ha-005052-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

TestMultiControlPlane/serial/RestartSecondaryNode (15.42s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-005052 node start m02 -v=7 --alsologtostderr: (14.480308436s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.42s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.65s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.12s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-005052 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-005052 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-005052 -v=7 --alsologtostderr: (36.738549849s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-005052 --wait=true -v=7 --alsologtostderr
E0415 10:32:36.442924   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:36.448227   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:36.458514   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:36.478760   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:36.519069   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:36.599491   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:36.759875   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:37.080560   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:37.721498   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:39.002587   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:41.563629   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:46.684617   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:32:49.138139   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:32:56.924910   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-005052 --wait=true -v=7 --alsologtostderr: (1m2.25890893s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-005052
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.12s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.89s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 node delete m03 -v=7 --alsologtostderr
E0415 10:33:16.833006   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:33:17.405775   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-005052 node delete m03 -v=7 --alsologtostderr: (9.12080946s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.89s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

TestMultiControlPlane/serial/StopCluster (35.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 stop -v=7 --alsologtostderr
E0415 10:33:58.367341   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-005052 stop -v=7 --alsologtostderr: (35.449564479s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr: exit status 7 (105.791011ms)

-- stdout --
	ha-005052
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-005052-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-005052-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0415 10:33:58.988611   92654 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:33:58.988716   92654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:33:58.988725   92654 out.go:304] Setting ErrFile to fd 2...
	I0415 10:33:58.988729   92654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:33:58.988956   92654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:33:58.989116   92654 out.go:298] Setting JSON to false
	I0415 10:33:58.989142   92654 mustload.go:65] Loading cluster: ha-005052
	I0415 10:33:58.989244   92654 notify.go:220] Checking for updates...
	I0415 10:33:58.989501   92654 config.go:182] Loaded profile config "ha-005052": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:33:58.989515   92654 status.go:255] checking status of ha-005052 ...
	I0415 10:33:58.989897   92654 cli_runner.go:164] Run: docker container inspect ha-005052 --format={{.State.Status}}
	I0415 10:33:59.007335   92654 status.go:330] ha-005052 host status = "Stopped" (err=<nil>)
	I0415 10:33:59.007364   92654 status.go:343] host is not running, skipping remaining checks
	I0415 10:33:59.007374   92654 status.go:257] ha-005052 status: &{Name:ha-005052 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:33:59.007401   92654 status.go:255] checking status of ha-005052-m02 ...
	I0415 10:33:59.007756   92654 cli_runner.go:164] Run: docker container inspect ha-005052-m02 --format={{.State.Status}}
	I0415 10:33:59.023958   92654 status.go:330] ha-005052-m02 host status = "Stopped" (err=<nil>)
	I0415 10:33:59.023993   92654 status.go:343] host is not running, skipping remaining checks
	I0415 10:33:59.023999   92654 status.go:257] ha-005052-m02 status: &{Name:ha-005052-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:33:59.024017   92654 status.go:255] checking status of ha-005052-m04 ...
	I0415 10:33:59.024233   92654 cli_runner.go:164] Run: docker container inspect ha-005052-m04 --format={{.State.Status}}
	I0415 10:33:59.039850   92654 status.go:330] ha-005052-m04 host status = "Stopped" (err=<nil>)
	I0415 10:33:59.039883   92654 status.go:343] host is not running, skipping remaining checks
	I0415 10:33:59.039891   92654 status.go:257] ha-005052-m04 status: &{Name:ha-005052-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.56s)

TestMultiControlPlane/serial/RestartCluster (67.93s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-005052 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-005052 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.175278991s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.93s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

TestMultiControlPlane/serial/AddSecondaryNode (36.93s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-005052 --control-plane -v=7 --alsologtostderr
E0415 10:35:20.288465   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-005052 --control-plane -v=7 --alsologtostderr: (36.089026969s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-005052 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (36.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

TestJSONOutput/start/Command (48.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-396723 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-396723 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (48.476802053s)
--- PASS: TestJSONOutput/start/Command (48.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-396723 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-396723 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.7s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-396723 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-396723 --output=json --user=testUser: (5.699831945s)
--- PASS: TestJSONOutput/stop/Command (5.70s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-672409 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-672409 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.624219ms)

-- stdout --
	{"specversion":"1.0","id":"2e626148-0225-4b37-875d-2666c8e8c0c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-672409] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7806665-4fb6-42e9-b3da-1bbd796faed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18641"}}
	{"specversion":"1.0","id":"a13b5af1-58d5-4a66-9527-04b575d880c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"baf374aa-17d6-4a64-bef3-026f599b4ad2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig"}}
	{"specversion":"1.0","id":"91b6543d-8cf4-43ee-a1fe-2dcb39104ccd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube"}}
	{"specversion":"1.0","id":"fd9de755-f52b-4b7d-b0be-59820d79d442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"511857a8-07ec-43de-a75e-0670d5ec83ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ea6c4272-36f0-41e9-bcaf-8ffa134c3ab3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-672409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-672409
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (30.65s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-566690 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-566690 --network=: (28.664809067s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-566690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-566690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-566690: (1.968145958s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.65s)

TestKicCustomNetwork/use_default_bridge_network (22.96s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-137500 --network=bridge
E0415 10:37:36.442589   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-137500 --network=bridge: (21.022192468s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-137500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-137500
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-137500: (1.92480853s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.96s)

TestKicExistingNetwork (25.28s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-467452 --network=existing-network
E0415 10:37:49.138858   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 10:38:04.128771   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-467452 --network=existing-network: (23.346798486s)
helpers_test.go:175: Cleaning up "existing-network-467452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-467452
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-467452: (1.807664773s)
--- PASS: TestKicExistingNetwork (25.28s)

TestKicCustomSubnet (26.17s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-331558 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-331558 --subnet=192.168.60.0/24: (24.160781989s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-331558 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-331558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-331558
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-331558: (1.99082726s)
--- PASS: TestKicCustomSubnet (26.17s)

TestKicStaticIP (27.19s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-446555 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-446555 --static-ip=192.168.200.200: (25.003711063s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-446555 ip
helpers_test.go:175: Cleaning up "static-ip-446555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-446555
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-446555: (2.05552286s)
--- PASS: TestKicStaticIP (27.19s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (50.17s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-343108 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-343108 --driver=docker  --container-runtime=containerd: (24.447150728s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-346178 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-346178 --driver=docker  --container-runtime=containerd: (20.680206789s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-343108
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-346178
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-346178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-346178
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-346178: (1.826389139s)
helpers_test.go:175: Cleaning up "first-343108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-343108
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-343108: (2.11925992s)
--- PASS: TestMinikubeProfile (50.17s)

TestMountStart/serial/StartWithMountFirst (7.86s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-482374 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-482374 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.864617723s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.86s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-482374 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.71s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-496572 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-496572 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.705365425s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.71s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-496572 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-482374 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-482374 --alsologtostderr -v=5: (1.580283328s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-496572 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-496572
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-496572: (1.184582888s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (6.84s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-496572
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-496572: (5.839911044s)
--- PASS: TestMountStart/serial/RestartStopped (6.84s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-496572 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (64.79s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337742 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-337742 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.335750001s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.79s)

TestMultiNode/serial/DeployApp2Nodes (32.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-337742 -- rollout status deployment/busybox: (30.414048772s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-9wkd6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-qvtc4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-9wkd6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-qvtc4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-9wkd6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-qvtc4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (32.04s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-9wkd6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-9wkd6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-qvtc4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337742 -- exec busybox-7fdf7869d9-qvtc4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
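The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` above takes the fifth line of the nslookup output and its third space-separated field. A sketch of the same extraction in Go; the sample output shape is hypothetical and may differ from real busybox nslookup output:

```go
package main

import (
	"fmt"
	"strings"
)

// hostIP mimics the shell pipeline used in the test above:
// pick line 5 of the output (awk 'NR==5'), then field 3 split on
// single spaces (cut -d' ' -f3, which does not collapse runs of spaces).
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox-style nslookup output.
	out := "Server: 10.96.0.10\nAddress: 10.96.0.10:53\n\nName: host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(hostIP(out))
	// → 192.168.67.1
}
```

The extracted address is then fed to `ping -c 1`, which is the second command in the test.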

TestMultiNode/serial/AddNode (18.14s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-337742 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-337742 -v 3 --alsologtostderr: (17.523593848s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.14s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-337742 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.3s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

TestMultiNode/serial/CopyFile (9.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp testdata/cp-test.txt multinode-337742:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile103127416/001/cp-test_multinode-337742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742:/home/docker/cp-test.txt multinode-337742-m02:/home/docker/cp-test_multinode-337742_multinode-337742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m02 "sudo cat /home/docker/cp-test_multinode-337742_multinode-337742-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742:/home/docker/cp-test.txt multinode-337742-m03:/home/docker/cp-test_multinode-337742_multinode-337742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m03 "sudo cat /home/docker/cp-test_multinode-337742_multinode-337742-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp testdata/cp-test.txt multinode-337742-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile103127416/001/cp-test_multinode-337742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742-m02:/home/docker/cp-test.txt multinode-337742:/home/docker/cp-test_multinode-337742-m02_multinode-337742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742 "sudo cat /home/docker/cp-test_multinode-337742-m02_multinode-337742.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742-m02:/home/docker/cp-test.txt multinode-337742-m03:/home/docker/cp-test_multinode-337742-m02_multinode-337742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m03 "sudo cat /home/docker/cp-test_multinode-337742-m02_multinode-337742-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp testdata/cp-test.txt multinode-337742-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile103127416/001/cp-test_multinode-337742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742-m03:/home/docker/cp-test.txt multinode-337742:/home/docker/cp-test_multinode-337742-m03_multinode-337742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742 "sudo cat /home/docker/cp-test_multinode-337742-m03_multinode-337742.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 cp multinode-337742-m03:/home/docker/cp-test.txt multinode-337742-m02:/home/docker/cp-test_multinode-337742-m03_multinode-337742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 ssh -n multinode-337742-m02 "sudo cat /home/docker/cp-test_multinode-337742-m03_multinode-337742-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.40s)

TestMultiNode/serial/StopNode (2.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-337742 node stop m03: (1.191086593s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-337742 status: exit status 7 (473.84419ms)

-- stdout --
	multinode-337742
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-337742-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-337742-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-337742 status --alsologtostderr: exit status 7 (478.556789ms)

-- stdout --
	multinode-337742
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-337742-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-337742-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0415 10:42:30.911682  154149 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:42:30.911786  154149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:42:30.911792  154149 out.go:304] Setting ErrFile to fd 2...
	I0415 10:42:30.911796  154149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:42:30.911998  154149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:42:30.912178  154149 out.go:298] Setting JSON to false
	I0415 10:42:30.912205  154149 mustload.go:65] Loading cluster: multinode-337742
	I0415 10:42:30.912316  154149 notify.go:220] Checking for updates...
	I0415 10:42:30.912599  154149 config.go:182] Loaded profile config "multinode-337742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:42:30.912620  154149 status.go:255] checking status of multinode-337742 ...
	I0415 10:42:30.913008  154149 cli_runner.go:164] Run: docker container inspect multinode-337742 --format={{.State.Status}}
	I0415 10:42:30.930154  154149 status.go:330] multinode-337742 host status = "Running" (err=<nil>)
	I0415 10:42:30.930199  154149 host.go:66] Checking if "multinode-337742" exists ...
	I0415 10:42:30.930473  154149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-337742
	I0415 10:42:30.947947  154149 host.go:66] Checking if "multinode-337742" exists ...
	I0415 10:42:30.948352  154149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:42:30.948413  154149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-337742
	I0415 10:42:30.966118  154149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32912 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/multinode-337742/id_rsa Username:docker}
	I0415 10:42:31.061619  154149 ssh_runner.go:195] Run: systemctl --version
	I0415 10:42:31.065516  154149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:42:31.076331  154149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:42:31.125754  154149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:63 SystemTime:2024-04-15 10:42:31.116423764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:42:31.127587  154149 kubeconfig.go:125] found "multinode-337742" server: "https://192.168.67.2:8443"
	I0415 10:42:31.127623  154149 api_server.go:166] Checking apiserver status ...
	I0415 10:42:31.127673  154149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 10:42:31.138509  154149 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1581/cgroup
	I0415 10:42:31.147639  154149 api_server.go:182] apiserver freezer: "12:freezer:/docker/ae9bef97d5304ac04be17d68f74dbdc49c2b8ebf81ecda7561d50a0ea142912a/kubepods/burstable/podaac506dc6ff56d0f0324b0734cac1681/b90b1eba76981f7bf993079c3862792371cfd34fb2d66788f26516cfe2a442a5"
	I0415 10:42:31.147703  154149 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ae9bef97d5304ac04be17d68f74dbdc49c2b8ebf81ecda7561d50a0ea142912a/kubepods/burstable/podaac506dc6ff56d0f0324b0734cac1681/b90b1eba76981f7bf993079c3862792371cfd34fb2d66788f26516cfe2a442a5/freezer.state
	I0415 10:42:31.155785  154149 api_server.go:204] freezer state: "THAWED"
	I0415 10:42:31.155816  154149 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0415 10:42:31.159371  154149 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0415 10:42:31.159396  154149 status.go:422] multinode-337742 apiserver status = Running (err=<nil>)
	I0415 10:42:31.159406  154149 status.go:257] multinode-337742 status: &{Name:multinode-337742 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:42:31.159427  154149 status.go:255] checking status of multinode-337742-m02 ...
	I0415 10:42:31.159725  154149 cli_runner.go:164] Run: docker container inspect multinode-337742-m02 --format={{.State.Status}}
	I0415 10:42:31.176450  154149 status.go:330] multinode-337742-m02 host status = "Running" (err=<nil>)
	I0415 10:42:31.176494  154149 host.go:66] Checking if "multinode-337742-m02" exists ...
	I0415 10:42:31.176797  154149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-337742-m02
	I0415 10:42:31.193506  154149 host.go:66] Checking if "multinode-337742-m02" exists ...
	I0415 10:42:31.193788  154149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:42:31.193824  154149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-337742-m02
	I0415 10:42:31.211086  154149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/18641-3502/.minikube/machines/multinode-337742-m02/id_rsa Username:docker}
	I0415 10:42:31.305328  154149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:42:31.316362  154149 status.go:257] multinode-337742-m02 status: &{Name:multinode-337742-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:42:31.316446  154149 status.go:255] checking status of multinode-337742-m03 ...
	I0415 10:42:31.316780  154149 cli_runner.go:164] Run: docker container inspect multinode-337742-m03 --format={{.State.Status}}
	I0415 10:42:31.333401  154149 status.go:330] multinode-337742-m03 host status = "Stopped" (err=<nil>)
	I0415 10:42:31.333438  154149 status.go:343] host is not running, skipping remaining checks
	I0415 10:42:31.333455  154149 status.go:257] multinode-337742-m03 status: &{Name:multinode-337742-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)
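The `status` runs above exit with status 7 because one host is stopped, even though the command itself ran fine. A minimal Python sketch (illustrative only, not part of minikube or this test suite) of parsing that plain-text status output into a per-node map:

```python
# Illustrative sketch: parse `minikube status` plain-text output (as
# shown above) into {node: {field: value}}. A bare line starts a new
# node block; "key: value" lines fill in its fields.

def parse_status(text):
    nodes, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if ":" in line:
            key, _, value = line.partition(":")
            nodes[current][key.strip()] = value.strip()
        else:
            current = line
            nodes[current] = {}
    return nodes

sample = """\
multinode-337742
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-337742-m03
type: Worker
host: Stopped
kubelet: Stopped
"""

status = parse_status(sample)
# exit status 7 corresponds to at least one stopped host:
stopped = [n for n, f in status.items() if f.get("host") == "Stopped"]
```

Here `stopped` contains only `multinode-337742-m03`, the node taken down by `node stop m03`.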
TestMultiNode/serial/StartAfterStop (8.64s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 node start m03 -v=7 --alsologtostderr
E0415 10:42:36.442901   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-337742 node start m03 -v=7 --alsologtostderr: (7.97098106s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.64s)
TestMultiNode/serial/RestartKeepsNodes (78.31s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-337742
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-337742
E0415 10:42:49.139888   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-337742: (24.694932921s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337742 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-337742 --wait=true -v=8 --alsologtostderr: (53.50063279s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-337742
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.31s)
TestMultiNode/serial/DeleteNode (5.08s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-337742 node delete m03: (4.500323179s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.08s)
TestMultiNode/serial/StopMultiNode (23.75s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 stop
E0415 10:44:12.193616   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-337742 stop: (23.561605325s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-337742 status: exit status 7 (96.362595ms)
-- stdout --
	multinode-337742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-337742-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-337742 status --alsologtostderr: exit status 7 (90.568716ms)
-- stdout --
	multinode-337742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-337742-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0415 10:44:27.079044  163418 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:44:27.079160  163418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:44:27.079170  163418 out.go:304] Setting ErrFile to fd 2...
	I0415 10:44:27.079173  163418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:44:27.079400  163418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:44:27.079618  163418 out.go:298] Setting JSON to false
	I0415 10:44:27.079645  163418 mustload.go:65] Loading cluster: multinode-337742
	I0415 10:44:27.079755  163418 notify.go:220] Checking for updates...
	I0415 10:44:27.079998  163418 config.go:182] Loaded profile config "multinode-337742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:44:27.080017  163418 status.go:255] checking status of multinode-337742 ...
	I0415 10:44:27.080397  163418 cli_runner.go:164] Run: docker container inspect multinode-337742 --format={{.State.Status}}
	I0415 10:44:27.097522  163418 status.go:330] multinode-337742 host status = "Stopped" (err=<nil>)
	I0415 10:44:27.097544  163418 status.go:343] host is not running, skipping remaining checks
	I0415 10:44:27.097551  163418 status.go:257] multinode-337742 status: &{Name:multinode-337742 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:44:27.097587  163418 status.go:255] checking status of multinode-337742-m02 ...
	I0415 10:44:27.097823  163418 cli_runner.go:164] Run: docker container inspect multinode-337742-m02 --format={{.State.Status}}
	I0415 10:44:27.114188  163418 status.go:330] multinode-337742-m02 host status = "Stopped" (err=<nil>)
	I0415 10:44:27.114213  163418 status.go:343] host is not running, skipping remaining checks
	I0415 10:44:27.114220  163418 status.go:257] multinode-337742-m02 status: &{Name:multinode-337742-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.75s)
TestMultiNode/serial/RestartMultiNode (52.43s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337742 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-337742 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.850145932s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337742 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.43s)
TestMultiNode/serial/ValidateNameConflict (22.16s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-337742
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337742-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-337742-m02 --driver=docker  --container-runtime=containerd: exit status 14 (79.195742ms)
-- stdout --
	* [multinode-337742-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-337742-m02' is duplicated with machine name 'multinode-337742-m02' in profile 'multinode-337742'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337742-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-337742-m03 --driver=docker  --container-runtime=containerd: (19.924154943s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-337742
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-337742: exit status 80 (269.252524ms)
-- stdout --
	* Adding node m03 to cluster multinode-337742 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-337742-m03 already exists in multinode-337742-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-337742-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-337742-m03: (1.829093216s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.16s)
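The exit-14 failure above comes from minikube's profile-name uniqueness rule: a new profile may not reuse an existing profile name or a machine name inside an existing multi-node profile. A hedged sketch of that check (the data below is a hand-written example, not read from minikube's profile store):

```python
# Sketch of the uniqueness rule exercised above; `name_conflicts` and
# the `existing` mapping are hypothetical illustrations.

def name_conflicts(new_profile, profiles):
    """profiles maps profile name -> list of machine names in it."""
    for profile, machines in profiles.items():
        if new_profile == profile or new_profile in machines:
            return True
    return False

existing = {
    "multinode-337742": ["multinode-337742", "multinode-337742-m02"],
}

# "-m02" collides with a machine name inside the existing profile
# (the MK_USAGE / exit 14 case above); "-m03" is free, so that
# second `start` succeeds.
conflict = name_conflicts("multinode-337742-m02", existing)
free = name_conflicts("multinode-337742-m03", existing)
```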
TestPreload (106.45s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-765719 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-765719 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m9.148961564s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-765719 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-765719 image pull gcr.io/k8s-minikube/busybox: (1.461589355s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-765719
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-765719: (5.679978011s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-765719 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-765719 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (27.723776422s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-765719 image list
helpers_test.go:175: Cleaning up "test-preload-765719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-765719
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-765719: (2.214820999s)
--- PASS: TestPreload (106.45s)
TestScheduledStopUnix (100.27s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-227682 --memory=2048 --driver=docker  --container-runtime=containerd
E0415 10:47:36.445728   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:47:49.138528   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-227682 --memory=2048 --driver=docker  --container-runtime=containerd: (23.996323323s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-227682 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-227682 -n scheduled-stop-227682
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-227682 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-227682 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-227682 -n scheduled-stop-227682
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-227682
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-227682 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0415 10:48:59.489108   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-227682
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-227682: exit status 7 (76.064825ms)
-- stdout --
	scheduled-stop-227682
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-227682 -n scheduled-stop-227682
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-227682 -n scheduled-stop-227682: exit status 7 (82.901344ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-227682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-227682
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-227682: (4.852636484s)
--- PASS: TestScheduledStopUnix (100.27s)
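The sequence above schedules a stop, replaces it with a shorter schedule, cancels it, and finally lets a fresh schedule fire. As a conceptual analogue only (minikube actually detaches a background stop process, not a thread), the same schedule/replace/cancel lifecycle can be modeled with a cancellable timer:

```python
# Conceptual analogue of the scheduled-stop lifecycle tested above;
# ScheduledStop is an illustration, not minikube's implementation.
import threading
import time

class ScheduledStop:
    def __init__(self):
        self._timer = None
        self.stopped = False

    def schedule(self, delay_s):
        self.cancel()  # a new --schedule replaces any pending one
        self._timer = threading.Timer(delay_s, self._stop)
        self._timer.start()

    def cancel(self):  # like --cancel-scheduled
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def _stop(self):
        self.stopped = True

s = ScheduledStop()
s.schedule(300)       # like: stop --schedule 5m
s.schedule(0.5)       # like: stop --schedule 15s (replaces the 5m timer)
s.cancel()            # like: stop --cancel-scheduled
time.sleep(0.1)
assert not s.stopped  # the cancelled timer never fired
s.schedule(0.05)      # schedule once more and let it fire
time.sleep(0.3)
```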
TestInsufficientStorage (9.72s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-650315 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-650315 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.350849747s)
-- stdout --
	{"specversion":"1.0","id":"1198475b-8089-4e42-a7ce-f6cad9a5ed20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-650315] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"705a5278-8ad8-4f7b-9bc4-6dfa82ff3b83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18641"}}
	{"specversion":"1.0","id":"52dd8847-dfec-4910-91ec-f4db329d668f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"32e2ee75-5984-49c8-a455-530cd3ab3dbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig"}}
	{"specversion":"1.0","id":"85a4b05f-3c41-4e00-9889-7fadbd646359","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube"}}
	{"specversion":"1.0","id":"4e0f9711-9748-44d4-b0b1-796d32821ad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"eabb713c-26cd-4fa4-a29c-3cd40f5146f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8afd2293-6b2d-4fb7-bb06-b8a411f73b90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5b0a1c7c-a990-40f6-acdf-61e01dbccaf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0d1d57f0-fb89-4ca0-9097-51fed09e5983","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"89d94e1e-0561-47f3-8a49-0cf9e4704c04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4c76baf4-f779-47bc-a6fb-345344e1785f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-650315\" primary control-plane node in \"insufficient-storage-650315\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"69d784c4-a1e8-4a71-ab32-7e62c8380e20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1712854342-18621 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"44c276e8-9342-4e63-8fad-dfd5ca6a7342","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0064e59d-aa54-4f78-8583-a9f13f0afa98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
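The `-- stdout --` block above is minikube's JSON event stream: one CloudEvent per line. As an illustration only (not part of the test suite), a consumer can filter such a stream for `*.error` events to recover the failure name and exit code:

```python
import json

def error_events(lines):
    """Yield (name, exitcode, message) for every *.error CloudEvent line."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type", "").endswith(".error"):
            data = event.get("data", {})
            yield data.get("name"), data.get("exitcode"), data.get("message")

# Abbreviated copy of the last event from the log above.
stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
    '"message":"Docker is out of disk space!"}}',
]
for name, code, msg in error_events(stream):
    print(name, code)  # RSRC_DOCKER_STORAGE 26
```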
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-650315 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-650315 --output=json --layout=cluster: exit status 7 (270.429903ms)

-- stdout --
	{"Name":"insufficient-storage-650315","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-650315","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0415 10:49:19.901893  185104 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-650315" does not appear in /home/jenkins/minikube-integration/18641-3502/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-650315 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-650315 --output=json --layout=cluster: exit status 7 (269.273248ms)

-- stdout --
	{"Name":"insufficient-storage-650315","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-650315","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0415 10:49:20.171772  185194 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-650315" does not appear in /home/jenkins/minikube-integration/18641-3502/kubeconfig
	E0415 10:49:20.181455  185194 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/insufficient-storage-650315/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-650315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-650315
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-650315: (1.824639994s)
--- PASS: TestInsufficientStorage (9.72s)
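For reference, the `status --output=json --layout=cluster` payload shown above can be consumed programmatically. The sketch below is not part of the test; it models an abbreviated copy of the payload and extracts the components that are not running (status code 507 is `InsufficientStorage`, 405 is `Stopped`):

```python
import json

# Abbreviated copy of the cluster-status payload from the log above.
payload = json.loads(
    '{"Name":"insufficient-storage-650315","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"Nodes":[{"Name":"insufficient-storage-650315","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"Components":{"apiserver":{"StatusCode":405,"StatusName":"Stopped"},'
    '"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}'
)

def unhealthy_components(status):
    """Return component names whose StatusName is not 'Running'."""
    bad = []
    for node in status.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            if comp.get("StatusName") != "Running":
                bad.append(name)
    return sorted(bad)

print(payload["StatusName"], unhealthy_components(payload))
# InsufficientStorage ['apiserver', 'kubelet']
```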

TestRunningBinaryUpgrade (68.44s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1479113177 start -p running-upgrade-907515 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1479113177 start -p running-upgrade-907515 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (35.025907321s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-907515 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-907515 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.855736545s)
helpers_test.go:175: Cleaning up "running-upgrade-907515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-907515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-907515: (2.913087248s)
--- PASS: TestRunningBinaryUpgrade (68.44s)

TestKubernetesUpgrade (323.63s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-831079 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-831079 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (46.890302018s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-831079
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-831079: (1.21825043s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-831079 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-831079 status --format={{.Host}}: exit status 7 (84.224595ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-831079 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-831079 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m28.486569212s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-831079 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-831079 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-831079 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (91.20815ms)

-- stdout --
	* [kubernetes-upgrade-831079] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-831079
	    minikube start -p kubernetes-upgrade-831079 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8310792 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-831079 --kubernetes-version=v1.30.0-rc.2
	    

** /stderr **
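As an illustration of the decision above (not minikube's actual implementation): a hypothetical helper that compares release tuples shows why `v1.20.0` is rejected against the existing `v1.30.0-rc.2` cluster with `K8S_DOWNGRADE_UNSUPPORTED`:

```python
# Hypothetical sketch, ignoring pre-release suffixes such as "-rc.2".
def release_tuple(version):
    """Parse 'v1.30.0-rc.2' -> (1, 30, 0)."""
    core = version.lstrip("v").split("-")[0]
    return tuple(int(part) for part in core.split("."))

def is_downgrade(current, requested):
    """True when the requested release is older than the running one."""
    return release_tuple(requested) < release_tuple(current)

print(is_downgrade("v1.30.0-rc.2", "v1.20.0"))  # True  -> refused, exit 106
print(is_downgrade("v1.20.0", "v1.30.0-rc.2"))  # False -> upgrade allowed
```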
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-831079 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-831079 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4.486474493s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-831079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-831079
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-831079: (2.310682705s)
--- PASS: TestKubernetesUpgrade (323.63s)

TestMissingContainerUpgrade (159.22s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1518576458 start -p missing-upgrade-585284 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1518576458 start -p missing-upgrade-585284 --memory=2200 --driver=docker  --container-runtime=containerd: (1m2.741997474s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-585284
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-585284: (10.289183632s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-585284
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-585284 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-585284 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m23.682981163s)
helpers_test.go:175: Cleaning up "missing-upgrade-585284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-585284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-585284: (1.930330183s)
--- PASS: TestMissingContainerUpgrade (159.22s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119893 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-119893 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (101.744497ms)

-- stdout --
	* [NoKubernetes-119893] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
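The exit status 14 (`MK_USAGE`) above reflects a flag mutual-exclusion rule. The following is a hypothetical sketch of that rule, not minikube's source:

```python
# Hypothetical: --no-kubernetes and --kubernetes-version cannot be combined.
def validate_flags(no_kubernetes: bool, kubernetes_version: str) -> None:
    """Reject --kubernetes-version when --no-kubernetes is set."""
    if no_kubernetes and kubernetes_version:
        raise SystemExit(
            "cannot specify --kubernetes-version with --no-kubernetes")

validate_flags(no_kubernetes=True, kubernetes_version="")  # fine: no version
try:
    validate_flags(no_kubernetes=True, kubernetes_version="1.20")
except SystemExit as err:
    print(err)  # cannot specify --kubernetes-version with --no-kubernetes
```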

TestNoKubernetes/serial/StartWithK8s (34.23s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119893 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119893 --driver=docker  --container-runtime=containerd: (33.909442907s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-119893 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.23s)

TestNetworkPlugins/group/false (7.87s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-600586 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-600586 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (195.468818ms)

-- stdout --
	* [false-600586] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18641
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0415 10:49:26.409221  187291 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:49:26.409367  187291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:49:26.409380  187291 out.go:304] Setting ErrFile to fd 2...
	I0415 10:49:26.409387  187291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:49:26.410031  187291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18641-3502/.minikube/bin
	I0415 10:49:26.411212  187291 out.go:298] Setting JSON to false
	I0415 10:49:26.412292  187291 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1917,"bootTime":1713176249,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 10:49:26.412355  187291 start.go:139] virtualization: kvm guest
	I0415 10:49:26.415904  187291 out.go:177] * [false-600586] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 10:49:26.417902  187291 out.go:177]   - MINIKUBE_LOCATION=18641
	I0415 10:49:26.419584  187291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:49:26.417924  187291 notify.go:220] Checking for updates...
	I0415 10:49:26.422951  187291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18641-3502/kubeconfig
	I0415 10:49:26.424670  187291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18641-3502/.minikube
	I0415 10:49:26.426357  187291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 10:49:26.427990  187291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 10:49:26.429982  187291 config.go:182] Loaded profile config "NoKubernetes-119893": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:49:26.430121  187291 config.go:182] Loaded profile config "force-systemd-env-133261": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:49:26.430205  187291 config.go:182] Loaded profile config "offline-containerd-053040": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 10:49:26.430320  187291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:49:26.459631  187291 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 10:49:26.459828  187291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:49:26.517745  187291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:74 SystemTime:2024-04-15 10:49:26.507549626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647976448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0415 10:49:26.517897  187291 docker.go:295] overlay module found
	I0415 10:49:26.524381  187291 out.go:177] * Using the docker driver based on user configuration
	I0415 10:49:26.525799  187291 start.go:297] selected driver: docker
	I0415 10:49:26.525821  187291 start.go:901] validating driver "docker" against <nil>
	I0415 10:49:26.525848  187291 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 10:49:26.528569  187291 out.go:177] 
	W0415 10:49:26.530063  187291 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0415 10:49:26.531423  187291 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-600586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-600586

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-600586" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: kubelet daemon config:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> k8s: kubelet logs:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-600586

>>> host: docker daemon status:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: docker daemon config:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: /etc/docker/daemon.json:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: docker system info:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: cri-docker daemon status:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: cri-docker daemon config:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: cri-dockerd version:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: containerd daemon status:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: containerd daemon config:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: /etc/containerd/config.toml:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: containerd config dump:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: crio daemon status:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: crio daemon config:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: /etc/crio:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

>>> host: crio config:
* Profile "false-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600586"

----------------------- debugLogs end: false-600586 [took: 7.435692353s] --------------------------------
helpers_test.go:175: Cleaning up "false-600586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-600586
--- PASS: TestNetworkPlugins/group/false (7.87s)

TestStoppedBinaryUpgrade/Setup (0.96s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

TestStoppedBinaryUpgrade/Upgrade (173.71s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.394566426 start -p stopped-upgrade-326738 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.394566426 start -p stopped-upgrade-326738 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m27.713625993s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.394566426 -p stopped-upgrade-326738 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.394566426 -p stopped-upgrade-326738 stop: (21.150579741s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-326738 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-326738 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.846230899s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (173.71s)

TestNoKubernetes/serial/StartWithStopK8s (11.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119893 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119893 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.967634172s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-119893 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-119893 status -o json: exit status 2 (353.190034ms)

-- stdout --
	{"Name":"NoKubernetes-119893","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
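The status line above is machine-readable, and `minikube status` signals state through its exit code as well (exit status 2 here, with the kubelet and apiserver stopped). As a minimal sketch, not the harness's actual code, this is roughly how the stopped components can be read out of that JSON:

```python
import json

# Profile status line copied verbatim from the `status -o json` output above.
raw = ('{"Name":"NoKubernetes-119893","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)

# The non-zero exit corresponds to components not Running; collect them
# the way a checker might.
stopped = [name for name in ("Host", "Kubelet", "APIServer")
           if status[name] == "Stopped"]
print(stopped)  # ['Kubelet', 'APIServer']
```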
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-119893
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-119893: (3.427020635s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.75s)

TestNoKubernetes/serial/Start (5.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119893 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119893 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.680240253s)
--- PASS: TestNoKubernetes/serial/Start (5.68s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-119893 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-119893 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.380771ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (0.91s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.91s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-119893
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-119893: (1.195631197s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (6.22s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119893 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119893 --driver=docker  --container-runtime=containerd: (6.218230804s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.22s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-119893 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-119893 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.772349ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-326738
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

TestPause/serial/Start (50.01s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-749261 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-749261 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (50.006153263s)
--- PASS: TestPause/serial/Start (50.01s)

TestNetworkPlugins/group/auto/Start (48.45s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (48.445319324s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.45s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-600586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-600586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-98lt2" [0dde0cdf-14d8-47d5-9bd4-dfb3e842584e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-98lt2" [0dde0cdf-14d8-47d5-9bd4-dfb3e842584e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003689936s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

TestPause/serial/SecondStartNoReconfiguration (5.34s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-749261 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-749261 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.326798996s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.34s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-749261 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-749261 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-749261 --output=json --layout=cluster: exit status 2 (293.919768ms)

-- stdout --
	{"Name":"pause-749261","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-749261","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
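The `--layout=cluster` status above nests per-node component states under `Nodes[].Components`. A minimal parsing sketch of that same JSON (Python purely for illustration; the harness itself is Go) showing what the VerifyStatus check effectively reads out, i.e. apiserver Paused and kubelet Stopped:

```python
import json

# Cluster-layout status line copied verbatim from the output above.
raw = ('{"Name":"pause-749261","StatusCode":418,"StatusName":"Paused",'
       '"Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, '
       'kubernetes-dashboard, storage-gluster, istio-operator",'
       '"BinaryVersion":"v1.33.0-beta.0",'
       '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},'
       '"Nodes":[{"Name":"pause-749261","StatusCode":200,"StatusName":"OK",'
       '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
       '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}')

cluster = json.loads(raw)

# Flatten node components into name -> StatusName.
components = {name: comp["StatusName"]
              for node in cluster["Nodes"]
              for name, comp in node["Components"].items()}
print(cluster["StatusName"], components)
```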
--- PASS: TestPause/serial/VerifyStatus (0.29s)

TestPause/serial/Unpause (0.6s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-749261 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

TestPause/serial/PauseAgain (0.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-749261 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

TestPause/serial/DeletePaused (2.46s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-749261 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-749261 --alsologtostderr -v=5: (2.463374391s)
--- PASS: TestPause/serial/DeletePaused (2.46s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-600586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestPause/serial/VerifyDeletedResources (0.71s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-749261
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-749261: exit status 1 (14.066165ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-749261: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.71s)

TestNetworkPlugins/group/kindnet/Start (49.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (49.884583014s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.88s)

TestNetworkPlugins/group/calico/Start (67.88s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m7.878961079s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.88s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f4b96" [75841753-c664-40d5-893d-b2852b04e45c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005541377s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-600586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-600586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5qlsx" [a2a2a9cd-931c-4b8f-b70d-3722b53e7f58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5qlsx" [a2a2a9cd-931c-4b8f-b70d-3722b53e7f58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003803588s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.25s)

TestNetworkPlugins/group/custom-flannel/Start (54.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.099624338s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.10s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-600586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m21.265728294s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.27s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-j6c76" [fbc065cf-6f73-47d7-b703-14291d178c8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005210159s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-600586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-600586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kgtw5" [6e21d9fb-007d-4054-9572-7eb681dd5716] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kgtw5" [6e21d9fb-007d-4054-9572-7eb681dd5716] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003894806s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-600586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.594642611s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.59s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-600586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-600586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zr2pf" [5d5c999f-9735-432a-868d-928ebc185119] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zr2pf" [5d5c999f-9735-432a-868d-928ebc185119] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004234173s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (54.04s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-600586 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (54.042304839s)
--- PASS: TestNetworkPlugins/group/bridge/Start (54.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-600586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (140.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-580781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-580781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m20.49320596s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cbtll" [38497d1f-33f8-45ef-ab56-a011a8a0f00c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004676075s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-600586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-600586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hvx8d" [7c20d2c4-2f99-473f-acb7-77b42b229a33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hvx8d" [7c20d2c4-2f99-473f-acb7-77b42b229a33] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004113257s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-600586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-600586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p5mzd" [025a56e0-9aec-4d70-9305-c5d83eb8b2d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p5mzd" [025a56e0-9aec-4d70-9305-c5d83eb8b2d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004373735s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-600586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-600586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-600586 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-600586 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vf2g2" [db490922-3b34-45a7-841a-70492cc9242e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vf2g2" [db490922-3b34-45a7-841a-70492cc9242e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004335843s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-600586 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-600586 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)
E0415 11:02:31.811588   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:02:36.443238   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 11:02:38.616364   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 11:02:42.091712   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:49.138800   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (64.35s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-404858 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-404858 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (1m4.348441909s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (55.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-965950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-965950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (55.36062008s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-407255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0415 10:57:36.443246   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/functional-518142/client.crt: no such file or directory
E0415 10:57:49.138113   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-407255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (54.60375148s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-965950 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [893c239e-233a-437b-99d3-2d29a2f07a38] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [893c239e-233a-437b-99d3-2d29a2f07a38] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004405687s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-965950 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-404858 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b71e970c-ed8f-48ae-be83-6a3984f145c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b71e970c-ed8f-48ae-be83-6a3984f145c8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00300911s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-404858 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-965950 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-965950 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-965950 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-965950 --alsologtostderr -v=3: (11.923077588s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-407255 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ddab385-7c04-40ae-a054-ee6fcbdc7923] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ddab385-7c04-40ae-a054-ee6fcbdc7923] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003607138s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-407255 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-404858 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-404858 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-404858 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-404858 --alsologtostderr -v=3: (11.941758535s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-407255 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-407255 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-965950 -n embed-certs-965950
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-965950 -n embed-certs-965950: exit status 7 (84.725898ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-965950 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
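The `--format={{.Host}}` arguments used by these status checks are Go `text/template` expressions that `minikube status` evaluates against its status record. A minimal sketch of how such a template renders, using a simplified stand-in struct (the field names `Host`, `Kubelet`, and `APIServer` match the templates in this log, but the struct itself is illustrative, not minikube's actual type):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a simplified stand-in for the record minikube exposes to
// `status --format`; only the fields referenced in this log are included.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

// render parses a --format-style template and executes it against st.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	out, _ := render("{{.Host}}", st)
	fmt.Println(out) // prints "Stopped", matching the stdout captured above
}
```

This is why a stopped cluster yields plain `Stopped` on stdout together with a non-zero exit code: the template output and the process exit status are reported independently, and the test treats the exit status as informational ("may be ok").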

TestStartStop/group/embed-certs/serial/SecondStart (262.51s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-965950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-965950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m22.197121831s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-965950 -n embed-certs-965950
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.51s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-407255 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-407255 --alsologtostderr -v=3: (12.038988522s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-404858 -n no-preload-404858
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-404858 -n no-preload-404858: exit status 7 (95.244581ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-404858 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (263.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-404858 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-404858 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (4m23.134780136s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-404858 -n no-preload-404858
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.44s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255: exit status 7 (100.103991ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-407255 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-407255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-407255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m22.758734629s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.11s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-580781 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93daf65e-3743-493f-bd53-05eba1d88d4d] Pending
helpers_test.go:344: "busybox" [93daf65e-3743-493f-bd53-05eba1d88d4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [93daf65e-3743-493f-bd53-05eba1d88d4d] Running
E0415 10:58:54.120739   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:54.126324   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:54.137003   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:54.157565   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:54.197998   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:54.278992   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:54.439846   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:54.760329   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:55.400884   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:58:56.681521   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004465391s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-580781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-580781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0415 10:58:59.242094   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-580781 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/old-k8s-version/serial/Stop (12.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-580781 --alsologtostderr -v=3
E0415 10:59:04.363090   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-580781 --alsologtostderr -v=3: (12.479042646s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-580781 -n old-k8s-version-580781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-580781 -n old-k8s-version-580781: exit status 7 (91.415579ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-580781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (120.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-580781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0415 10:59:14.603511   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:59:35.083812   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 10:59:54.773979   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:54.779246   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:54.789518   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:54.809796   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:54.850074   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:54.930371   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:55.091275   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:55.411878   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:56.052221   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:57.333088   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 10:59:59.893776   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 11:00:05.014515   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 11:00:15.254917   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 11:00:16.044435   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 11:00:30.468109   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:30.473397   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:30.483671   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:30.504000   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:30.544306   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:30.624720   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:30.785113   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:31.105976   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:31.746880   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:33.027665   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:35.587891   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:35.735115   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
E0415 11:00:40.708696   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:50.949216   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:00:52.194739   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/addons-798865/client.crt: no such file or directory
E0415 11:01:00.181796   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:00.187152   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:00.197422   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:00.217752   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:00.258009   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:00.338299   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:00.498846   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:00.819504   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:01.459711   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:02.740717   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:05.301815   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:10.422622   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:11.429987   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-580781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (1m59.855230228s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-580781 -n old-k8s-version-580781
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (120.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-z5b98" [c2dcc71b-f839-4e71-bd71-6f51b573555f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-z5b98" [c2dcc71b-f839-4e71-bd71-6f51b573555f] Running
E0415 11:01:16.695288   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/kindnet-600586/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004003811s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-z5b98" [c2dcc71b-f839-4e71-bd71-6f51b573555f] Running
E0415 11:01:20.662942   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003517014s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-580781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-580781 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-580781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-580781 -n old-k8s-version-580781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-580781 -n old-k8s-version-580781: exit status 2 (307.624079ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-580781 -n old-k8s-version-580781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-580781 -n old-k8s-version-580781: exit status 2 (296.498148ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-580781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-580781 -n old-k8s-version-580781
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-580781 -n old-k8s-version-580781
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.60s)

TestStartStop/group/newest-cni/serial/FirstStart (37.33s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-838906 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
E0415 11:01:37.965316   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/auto-600586/client.crt: no such file or directory
E0415 11:01:41.143718   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
E0415 11:01:42.075157   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:42.081243   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:42.091615   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:42.112691   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:42.153784   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:42.234191   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:42.394366   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:42.715087   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:43.355821   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:44.636840   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:47.197583   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:50.849802   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:50.855077   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:50.865365   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:50.885689   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:50.925980   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:51.006350   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:51.166810   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:51.487420   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:52.127753   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:52.318257   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:01:52.390550   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
E0415 11:01:53.408214   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:01:55.969244   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:02:01.089848   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:02:01.130143   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:01.135406   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:01.145676   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:01.165998   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:01.206823   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:01.287156   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:01.447574   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:01.768542   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:02.409249   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:02.558773   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
E0415 11:02:03.689379   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:06.249846   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-838906 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (37.329102374s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-838906 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.23s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-838906 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-838906 --alsologtostderr -v=3: (1.228474902s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-838906 -n newest-cni-838906
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-838906 -n newest-cni-838906: exit status 7 (76.198739ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-838906 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.43s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-838906 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
E0415 11:02:11.330723   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/enable-default-cni-600586/client.crt: no such file or directory
E0415 11:02:11.370154   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:21.611007   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
E0415 11:02:22.103950   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/custom-flannel-600586/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-838906 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (13.110198644s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-838906 -n newest-cni-838906
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-838906 image list --format=json
E0415 11:02:23.039806   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.68s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-838906 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-838906 -n newest-cni-838906
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-838906 -n newest-cni-838906: exit status 2 (309.316774ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-838906 -n newest-cni-838906
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-838906 -n newest-cni-838906: exit status 2 (307.210073ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-838906 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-838906 -n newest-cni-838906
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-838906 -n newest-cni-838906
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8kwz9" [26bd99b4-7b53-4bc7-9df1-e11d750b4ae6] Running
E0415 11:03:04.000778   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/flannel-600586/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004147374s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8kwz9" [26bd99b4-7b53-4bc7-9df1-e11d750b4ae6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003555051s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-965950 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-49vl2" [e06b9e0a-3658-4b0c-bb04-6be35a2ead4b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003780795s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-965950 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.7s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-965950 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-965950 -n embed-certs-965950
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-965950 -n embed-certs-965950: exit status 2 (294.375565ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-965950 -n embed-certs-965950
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-965950 -n embed-certs-965950: exit status 2 (296.599274ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-965950 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-965950 -n embed-certs-965950
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-965950 -n embed-certs-965950
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bnnxh" [d1ecf968-eac9-46f7-8938-1bd673b2b543] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003793288s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-49vl2" [e06b9e0a-3658-4b0c-bb04-6be35a2ead4b] Running
E0415 11:03:14.311105   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/calico-600586/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003683077s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-404858 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bnnxh" [d1ecf968-eac9-46f7-8938-1bd673b2b543] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003946377s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-407255 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-404858 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.63s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-404858 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-404858 -n no-preload-404858
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-404858 -n no-preload-404858: exit status 2 (289.91463ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-404858 -n no-preload-404858
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-404858 -n no-preload-404858: exit status 2 (297.878676ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-404858 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-404858 -n no-preload-404858
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-404858 -n no-preload-404858
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-407255 image list --format=json
E0415 11:03:23.051988   10319 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18641-3502/.minikube/profiles/bridge-600586/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-407255 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255: exit status 2 (285.784083ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255: exit status 2 (285.791213ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-407255 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-407255 -n default-k8s-diff-port-407255
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

                                                
                                    

Test skip (26/335)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-600586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-600586

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-600586

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /etc/hosts:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /etc/resolv.conf:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-600586

>>> host: crictl pods:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: crictl containers:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> k8s: describe netcat deployment:
error: context "kubenet-600586" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-600586" does not exist

>>> k8s: netcat logs:
error: context "kubenet-600586" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-600586" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-600586" does not exist

>>> k8s: coredns logs:
error: context "kubenet-600586" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-600586" does not exist

>>> k8s: api server logs:
error: context "kubenet-600586" does not exist

>>> host: /etc/cni:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: ip a s:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: ip r s:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: iptables-save:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: iptables table nat:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-600586" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-600586" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-600586" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: kubelet daemon config:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> k8s: kubelet logs:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-600586

>>> host: docker daemon status:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: docker daemon config:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: docker system info:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: cri-docker daemon status:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: cri-docker daemon config:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: cri-dockerd version:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: containerd daemon status:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: containerd daemon config:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: containerd config dump:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: crio daemon status:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: crio daemon config:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: /etc/crio:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

>>> host: crio config:
* Profile "kubenet-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600586"

----------------------- debugLogs end: kubenet-600586 [took: 4.152623582s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-600586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-600586
--- SKIP: TestNetworkPlugins/group/kubenet (4.33s)

TestNetworkPlugins/group/cilium (4.38s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-600586 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-600586

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-600586

>>> host: crictl pods:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: crictl containers:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> k8s: describe netcat deployment:
error: context "cilium-600586" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-600586" does not exist

>>> k8s: netcat logs:
error: context "cilium-600586" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-600586" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-600586" does not exist

>>> k8s: coredns logs:
error: context "cilium-600586" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-600586" does not exist

>>> k8s: api server logs:
error: context "cilium-600586" does not exist

>>> host: /etc/cni:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: ip a s:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: ip r s:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: iptables-save:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: iptables table nat:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-600586

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-600586

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-600586" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-600586" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-600586

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-600586

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-600586" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-600586" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-600586" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-600586" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-600586" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: kubelet daemon config:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> k8s: kubelet logs:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-600586

>>> host: docker daemon status:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: docker daemon config:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: docker system info:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: cri-docker daemon status:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: cri-docker daemon config:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: cri-dockerd version:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: containerd daemon status:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: containerd daemon config:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: containerd config dump:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: crio daemon status:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: crio daemon config:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: /etc/crio:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

>>> host: crio config:
* Profile "cilium-600586" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600586"

----------------------- debugLogs end: cilium-600586 [took: 4.198532851s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-600586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-600586
--- SKIP: TestNetworkPlugins/group/cilium (4.38s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-828595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-828595
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
