Test Report: Docker_Linux_crio 18169

248a87e642b5c2a9040ef2ce1129e71918aa65a4:2024-02-13:33129

Failed tests (3/320)

| Order | Failed test                                                  | Duration (s) |
|-------|--------------------------------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress                                  | 154.73       |
| 143   | TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon  | 10.07        |
| 171   | TestIngressAddonLegacy/serial/ValidateIngressAddons          | 183.78       |
TestAddons/parallel/Ingress (154.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-913502 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-913502 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-913502 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6a367c0e-00d0-4f0c-a462-bf6e428f5d03] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6a367c0e-00d0-4f0c-a462-bf6e428f5d03] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004072724s
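	(Editorial note: the readiness wait logged above is equivalent to a label-selector kubectl wait. A minimal Go sketch of that step, shelling out to kubectl; the function name is illustrative, not taken from helpers_test.go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// waitForNginx blocks until pods matching run=nginx report Ready,
	// mirroring the 8m0s wait logged above.
	func waitForNginx(context string) error {
		out, err := exec.Command("kubectl", "--context", context,
			"wait", "--for=condition=ready", "pod",
			"--selector=run=nginx", "--timeout=8m").CombinedOutput()
		fmt.Print(string(out))
		return err
	}
	)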
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-913502 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.459358995s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
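	(Editorial note: curl reports exit code 28 for an operation timeout, so the "Process exited with status 28" above means the in-node curl never received an HTTP response from the ingress. A hedged Go sketch of the kind of check the test performs; the names and the retry loop are assumptions, not the test's actual logic:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// curlViaSSH runs curl inside the minikube node, as in the failing step above.
	func curlViaSSH(profile string) error {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
			"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		_, err := cmd.CombinedOutput()
		return err // non-nil corresponds to the exit status 1 seen above
	}

	func main() {
		// Assumed retry window; the report only shows the final 2m10s failure.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if err := curlViaSSH("addons-913502"); err == nil {
				fmt.Println("ingress responded")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("failed to get expected response from http://127.0.0.1/")
	}
	)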
addons_test.go:286: (dbg) Run:  kubectl --context addons-913502 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-913502 addons disable ingress-dns --alsologtostderr -v=1: (1.389781743s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-913502 addons disable ingress --alsologtostderr -v=1: (7.636097846s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-913502
helpers_test.go:235: (dbg) docker inspect addons-913502:

-- stdout --
	[
	    {
	        "Id": "3a3c4bea7929182449776c05fa455c4211c81c7e833202acb79be3ab764f9ccb",
	        "Created": "2024-02-13T23:01:55.571688939Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 75592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:01:55.862518621Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3a3c4bea7929182449776c05fa455c4211c81c7e833202acb79be3ab764f9ccb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a3c4bea7929182449776c05fa455c4211c81c7e833202acb79be3ab764f9ccb/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a3c4bea7929182449776c05fa455c4211c81c7e833202acb79be3ab764f9ccb/hosts",
	        "LogPath": "/var/lib/docker/containers/3a3c4bea7929182449776c05fa455c4211c81c7e833202acb79be3ab764f9ccb/3a3c4bea7929182449776c05fa455c4211c81c7e833202acb79be3ab764f9ccb-json.log",
	        "Name": "/addons-913502",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-913502:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-913502",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3e8f3d3d2a707c488f0937be59982e109a3023b56a47058f35f66fe824106805-init/diff:/var/lib/docker/overlay2/4fe14e78c622f13dfc4094e03ac245950865fc60884691f5477756f62ef198c3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e8f3d3d2a707c488f0937be59982e109a3023b56a47058f35f66fe824106805/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e8f3d3d2a707c488f0937be59982e109a3023b56a47058f35f66fe824106805/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e8f3d3d2a707c488f0937be59982e109a3023b56a47058f35f66fe824106805/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-913502",
	                "Source": "/var/lib/docker/volumes/addons-913502/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-913502",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-913502",
	                "name.minikube.sigs.k8s.io": "addons-913502",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "977adb16ddd7936c943d24cf6bbed1c2fbcc9892f88187bad5ac1a30fc183f68",
	            "SandboxKey": "/var/run/docker/netns/977adb16ddd7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-913502": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3a3c4bea7929",
	                        "addons-913502"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "21cd6f2cdff20306f8d612e0007531ff7b32f4461a8e90920eb3d5695c5858c9",
	                    "EndpointID": "58e39049db5d55c8df8a1b40c1fbde3584d2fa7404d203924a57554b2890a41a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-913502",
	                        "3a3c4bea7929"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
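	(Editorial note: the inspect dump above is what the provisioning code later queries for the SSH port mapping; see the "22/tcp" HostPort template in the Last Start log below. A minimal Go sketch that pulls the same binding out of the JSON; the struct covers only the fields used here and the helper name is illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the NetworkSettings.Ports part of docker inspect output.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	// sshHostPort returns the host port published for the container's 22/tcp,
	// e.g. "32772" in the dump above.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			return "", fmt.Errorf("could not parse inspect output: %v", err)
		}
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no host binding for 22/tcp on %s", container)
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		port, err := sshHostPort("addons-913502")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("ssh mapped to 127.0.0.1:" + port)
	}
	)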
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-913502 -n addons-913502
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-913502 logs -n 25: (1.197947015s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-658548                                                                     | download-only-658548   | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| delete  | -p download-only-940739                                                                     | download-only-940739   | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| start   | --download-only -p                                                                          | download-docker-574132 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | download-docker-574132                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-574132                                                                   | download-docker-574132 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-182974   | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | binary-mirror-182974                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45437                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-182974                                                                     | binary-mirror-182974   | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| addons  | enable dashboard -p                                                                         | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | addons-913502                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | addons-913502                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-913502 --wait=true                                                                | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | -p addons-913502                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-913502 addons disable                                                                | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-913502 ip                                                                            | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	| addons  | addons-913502 addons disable                                                                | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-913502 addons                                                                        | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | addons-913502                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | -p addons-913502                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-913502 ssh curl -s                                                                   | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | addons-913502                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-913502 ssh cat                                                                       | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | /opt/local-path-provisioner/pvc-d0c9bd29-9bf8-4b15-8147-542eee087336_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-913502 addons disable                                                                | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:04 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-913502 addons                                                                        | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-913502 addons                                                                        | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:05 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-913502 ip                                                                            | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:06 UTC | 13 Feb 24 23:06 UTC |
	| addons  | addons-913502 addons disable                                                                | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:06 UTC | 13 Feb 24 23:06 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-913502 addons disable                                                                | addons-913502          | jenkins | v1.32.0 | 13 Feb 24 23:06 UTC | 13 Feb 24 23:06 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:01:33
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:01:33.802457   74928 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:01:33.802751   74928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:33.802764   74928 out.go:304] Setting ErrFile to fd 2...
	I0213 23:01:33.802772   74928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:33.802964   74928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:01:33.803588   74928 out.go:298] Setting JSON to false
	I0213 23:01:33.804445   74928 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6241,"bootTime":1707859053,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:01:33.804522   74928 start.go:138] virtualization: kvm guest
	I0213 23:01:33.807145   74928 out.go:177] * [addons-913502] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:01:33.808659   74928 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 23:01:33.808736   74928 notify.go:220] Checking for updates...
	I0213 23:01:33.810211   74928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:01:33.811749   74928 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:01:33.813408   74928 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	I0213 23:01:33.814940   74928 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:01:33.816553   74928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:01:33.818237   74928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:01:33.838950   74928 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0213 23:01:33.839081   74928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:01:33.887533   74928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-13 23:01:33.879264713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:01:33.887640   74928 docker.go:295] overlay module found
	I0213 23:01:33.889521   74928 out.go:177] * Using the docker driver based on user configuration
	I0213 23:01:33.890753   74928 start.go:298] selected driver: docker
	I0213 23:01:33.890764   74928 start.go:902] validating driver "docker" against <nil>
	I0213 23:01:33.890773   74928 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:01:33.891502   74928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:01:33.939403   74928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-13 23:01:33.931405034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:01:33.939552   74928 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 23:01:33.939753   74928 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 23:01:33.941528   74928 out.go:177] * Using Docker driver with root privileges
	I0213 23:01:33.943037   74928 cni.go:84] Creating CNI manager for ""
	I0213 23:01:33.943056   74928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:01:33.943064   74928 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0213 23:01:33.943079   74928 start_flags.go:321] config:
	{Name:addons-913502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-913502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:01:33.944525   74928 out.go:177] * Starting control plane node addons-913502 in cluster addons-913502
	I0213 23:01:33.945838   74928 cache.go:121] Beginning downloading kic base image for docker with crio
	I0213 23:01:33.947351   74928 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 23:01:33.948796   74928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:01:33.948823   74928 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 23:01:33.948830   74928 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:01:33.948842   74928 cache.go:56] Caching tarball of preloaded images
	I0213 23:01:33.948922   74928 preload.go:174] Found /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:01:33.948932   74928 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 23:01:33.949212   74928 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/config.json ...
	I0213 23:01:33.949233   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/config.json: {Name:mkda011f67b178e30142640d19faa773cc1a510c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:33.963354   74928 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 23:01:33.963477   74928 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 23:01:33.963499   74928 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0213 23:01:33.963504   74928 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0213 23:01:33.963517   74928 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0213 23:01:33.963528   74928 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0213 23:01:45.779067   74928 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0213 23:01:45.779130   74928 cache.go:194] Successfully downloaded all kic artifacts
	I0213 23:01:45.779176   74928 start.go:365] acquiring machines lock for addons-913502: {Name:mkef1676655d6663ccf6dbaf971e7bc2d4264742 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:01:45.779300   74928 start.go:369] acquired machines lock for "addons-913502" in 101.791µs
	I0213 23:01:45.779340   74928 start.go:93] Provisioning new machine with config: &{Name:addons-913502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-913502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:01:45.779457   74928 start.go:125] createHost starting for "" (driver="docker")
	I0213 23:01:45.842307   74928 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0213 23:01:45.842640   74928 start.go:159] libmachine.API.Create for "addons-913502" (driver="docker")
	I0213 23:01:45.842698   74928 client.go:168] LocalClient.Create starting
	I0213 23:01:45.842847   74928 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem
	I0213 23:01:45.961344   74928 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem
	I0213 23:01:46.263337   74928 cli_runner.go:164] Run: docker network inspect addons-913502 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 23:01:46.278717   74928 cli_runner.go:211] docker network inspect addons-913502 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 23:01:46.278800   74928 network_create.go:281] running [docker network inspect addons-913502] to gather additional debugging logs...
	I0213 23:01:46.278825   74928 cli_runner.go:164] Run: docker network inspect addons-913502
	W0213 23:01:46.293546   74928 cli_runner.go:211] docker network inspect addons-913502 returned with exit code 1
	I0213 23:01:46.293580   74928 network_create.go:284] error running [docker network inspect addons-913502]: docker network inspect addons-913502: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-913502 not found
	I0213 23:01:46.293598   74928 network_create.go:286] output of [docker network inspect addons-913502]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-913502 not found
	
	** /stderr **
	I0213 23:01:46.293706   74928 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 23:01:46.308813   74928 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00217fe90}
	I0213 23:01:46.308858   74928 network_create.go:124] attempt to create docker network addons-913502 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0213 23:01:46.308942   74928 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-913502 addons-913502
	I0213 23:01:46.416637   74928 network_create.go:108] docker network addons-913502 192.168.49.0/24 created
	I0213 23:01:46.416681   74928 kic.go:121] calculated static IP "192.168.49.2" for the "addons-913502" container
	I0213 23:01:46.416755   74928 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 23:01:46.431038   74928 cli_runner.go:164] Run: docker volume create addons-913502 --label name.minikube.sigs.k8s.io=addons-913502 --label created_by.minikube.sigs.k8s.io=true
	I0213 23:01:46.509179   74928 oci.go:103] Successfully created a docker volume addons-913502
	I0213 23:01:46.509285   74928 cli_runner.go:164] Run: docker run --rm --name addons-913502-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-913502 --entrypoint /usr/bin/test -v addons-913502:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 23:01:50.336422   74928 cli_runner.go:217] Completed: docker run --rm --name addons-913502-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-913502 --entrypoint /usr/bin/test -v addons-913502:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (3.827083726s)
	I0213 23:01:50.336492   74928 oci.go:107] Successfully prepared a docker volume addons-913502
	I0213 23:01:50.336511   74928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:01:50.336537   74928 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 23:01:50.336602   74928 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-913502:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 23:01:55.507474   74928 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-913502:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.17082s)
	I0213 23:01:55.507510   74928 kic.go:203] duration metric: took 5.170972 seconds to extract preloaded images to volume
	W0213 23:01:55.507709   74928 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0213 23:01:55.507913   74928 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 23:01:55.557651   74928 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-913502 --name addons-913502 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-913502 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-913502 --network addons-913502 --ip 192.168.49.2 --volume addons-913502:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 23:01:55.870673   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Running}}
	I0213 23:01:55.887910   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:01:55.905354   74928 cli_runner.go:164] Run: docker exec addons-913502 stat /var/lib/dpkg/alternatives/iptables
	I0213 23:01:55.944529   74928 oci.go:144] the created container "addons-913502" has a running status.
	I0213 23:01:55.944562   74928 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa...
	I0213 23:01:56.351673   74928 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 23:01:56.372514   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:01:56.388762   74928 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 23:01:56.388784   74928 kic_runner.go:114] Args: [docker exec --privileged addons-913502 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 23:01:56.442339   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:01:56.458982   74928 machine.go:88] provisioning docker machine ...
	I0213 23:01:56.459083   74928 ubuntu.go:169] provisioning hostname "addons-913502"
	I0213 23:01:56.459174   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:01:56.475502   74928 main.go:141] libmachine: Using SSH client type: native
	I0213 23:01:56.475851   74928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0213 23:01:56.475871   74928 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-913502 && echo "addons-913502" | sudo tee /etc/hostname
	I0213 23:01:56.623054   74928 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-913502
	
	I0213 23:01:56.623165   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:01:56.640216   74928 main.go:141] libmachine: Using SSH client type: native
	I0213 23:01:56.640605   74928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0213 23:01:56.640624   74928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-913502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-913502/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-913502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:01:56.772385   74928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:01:56.772414   74928 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18169-66678/.minikube CaCertPath:/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18169-66678/.minikube}
	I0213 23:01:56.772442   74928 ubuntu.go:177] setting up certificates
	I0213 23:01:56.772452   74928 provision.go:83] configureAuth start
	I0213 23:01:56.772509   74928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-913502
	I0213 23:01:56.788622   74928 provision.go:138] copyHostCerts
	I0213 23:01:56.788692   74928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18169-66678/.minikube/ca.pem (1078 bytes)
	I0213 23:01:56.788815   74928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18169-66678/.minikube/cert.pem (1123 bytes)
	I0213 23:01:56.788880   74928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18169-66678/.minikube/key.pem (1679 bytes)
	I0213 23:01:56.788946   74928 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18169-66678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca-key.pem org=jenkins.addons-913502 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-913502]
	I0213 23:01:56.904275   74928 provision.go:172] copyRemoteCerts
	I0213 23:01:56.904360   74928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:01:56.904402   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:01:56.920491   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
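
The Port:32772 in the ssh client above is whatever host port Docker published for the container's 22/tcp; it is resolved with the same inspect template that recurs throughout this log. A stand-alone equivalent (container name taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask Docker which host port backs the container's SSH port.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-913502").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32772 in this run
	}
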
	I0213 23:01:57.016710   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:01:57.038007   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:01:57.058913   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0213 23:01:57.080012   74928 provision.go:86] duration metric: configureAuth took 307.543796ms
	I0213 23:01:57.080041   74928 ubuntu.go:193] setting minikube options for container-runtime
	I0213 23:01:57.080242   74928 config.go:182] Loaded profile config "addons-913502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:01:57.080401   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:01:57.096275   74928 main.go:141] libmachine: Using SSH client type: native
	I0213 23:01:57.096782   74928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0213 23:01:57.096808   74928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:01:57.314209   74928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:01:57.314236   74928 machine.go:91] provisioned docker machine in 855.228843ms
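
The %!s(MISSING) tokens in the crio.minikube command above are log-formatting residue, not part of the command that ran: the shell string contains a literal %s, and when it is later rendered through a printf-style formatter with no matching argument, Go's fmt package prints the verb that way. A short demonstration:

	package main

	import "fmt"

	func main() {
		// A format string with a %s verb but no operand reproduces the
		// "%!s(MISSING)" token seen in the log above.
		s := "printf %s | sudo tee /etc/sysconfig/crio.minikube"
		fmt.Println(fmt.Sprintf(s))
	}
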
	I0213 23:01:57.314248   74928 client.go:171] LocalClient.Create took 11.471539737s
	I0213 23:01:57.314274   74928 start.go:167] duration metric: libmachine.API.Create for "addons-913502" took 11.471638568s
	I0213 23:01:57.314286   74928 start.go:300] post-start starting for "addons-913502" (driver="docker")
	I0213 23:01:57.314301   74928 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:01:57.314361   74928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:01:57.314409   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:01:57.330000   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:01:57.424767   74928 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:01:57.427764   74928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 23:01:57.427806   74928 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 23:01:57.427815   74928 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 23:01:57.427822   74928 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 23:01:57.427833   74928 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-66678/.minikube/addons for local assets ...
	I0213 23:01:57.427905   74928 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-66678/.minikube/files for local assets ...
	I0213 23:01:57.427932   74928 start.go:303] post-start completed in 113.639134ms
	I0213 23:01:57.428175   74928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-913502
	I0213 23:01:57.444115   74928 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/config.json ...
	I0213 23:01:57.444426   74928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 23:01:57.444484   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:01:57.459446   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:01:57.548937   74928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 23:01:57.552957   74928 start.go:128] duration metric: createHost completed in 11.77348393s
	I0213 23:01:57.552985   74928 start.go:83] releasing machines lock for "addons-913502", held for 11.773671818s
	I0213 23:01:57.553074   74928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-913502
	I0213 23:01:57.568654   74928 ssh_runner.go:195] Run: cat /version.json
	I0213 23:01:57.568712   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:01:57.568743   74928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:01:57.568805   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:01:57.584943   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:01:57.585296   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:01:57.765743   74928 ssh_runner.go:195] Run: systemctl --version
	I0213 23:01:57.769918   74928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:01:57.906448   74928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 23:01:57.910852   74928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:01:57.928667   74928 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0213 23:01:57.928758   74928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:01:57.954316   74928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0213 23:01:57.954348   74928 start.go:475] detecting cgroup driver to use...
	I0213 23:01:57.954383   74928 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 23:01:57.954475   74928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:01:57.968671   74928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:01:57.978853   74928 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:01:57.978923   74928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:01:57.990968   74928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:01:58.004646   74928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:01:58.089369   74928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:01:58.165327   74928 docker.go:233] disabling docker service ...
	I0213 23:01:58.165394   74928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:01:58.182424   74928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:01:58.192691   74928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:01:58.272866   74928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:01:58.353308   74928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:01:58.363415   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:01:58.378037   74928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:01:58.378098   74928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:01:58.386524   74928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:01:58.386589   74928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:01:58.395177   74928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:01:58.403730   74928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:01:58.412174   74928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:01:58.419957   74928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:01:58.427182   74928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:01:58.434497   74928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:01:58.504415   74928 ssh_runner.go:195] Run: sudo systemctl restart crio
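
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, set the cgroupfs manager, and re-add conmon_cgroup = "pod" before the daemon restart. A sketch of the same edit done in Go (values copied from the log; error handling kept minimal):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// pause_image = "registry.k8s.io/pause:3.9"
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		// Drop any existing conmon_cgroup line, then set cgroup_manager and
		// re-add conmon_cgroup = "pod" immediately after it.
		data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		if err := os.WriteFile(conf, data, 0644); err != nil {
			panic(err)
		}
	}
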
	I0213 23:01:58.594502   74928 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:01:58.594583   74928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:01:58.598030   74928 start.go:543] Will wait 60s for crictl version
	I0213 23:01:58.598086   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:01:58.601067   74928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:01:58.633715   74928 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0213 23:01:58.633820   74928 ssh_runner.go:195] Run: crio --version
	I0213 23:01:58.667569   74928 ssh_runner.go:195] Run: crio --version
	I0213 23:01:58.702347   74928 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0213 23:01:58.703965   74928 cli_runner.go:164] Run: docker network inspect addons-913502 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 23:01:58.719309   74928 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0213 23:01:58.722871   74928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:01:58.732851   74928 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:01:58.732915   74928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:01:58.786394   74928 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:01:58.786417   74928 crio.go:415] Images already preloaded, skipping extraction
	I0213 23:01:58.786478   74928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:01:58.818329   74928 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:01:58.818354   74928 cache_images.go:84] Images are preloaded, skipping loading
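
The two crictl runs above decide whether the preloaded tarball already covers the runtime. In the same spirit, a stand-alone check can list images via crictl's JSON output and look for an expected tag (the JSON shape follows the CRI ListImagesResponse; the expected-image value here is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		want := "registry.k8s.io/pause:3.9" // illustrative expected image
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("preloaded:", want)
					return
				}
			}
		}
		fmt.Println("missing:", want)
	}
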
	I0213 23:01:58.818439   74928 ssh_runner.go:195] Run: crio config
	I0213 23:01:58.858847   74928 cni.go:84] Creating CNI manager for ""
	I0213 23:01:58.858866   74928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:01:58.858885   74928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:01:58.858910   74928 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-913502 NodeName:addons-913502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:01:58.859068   74928 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-913502"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:01:58.859142   74928 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-913502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-913502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:01:58.859207   74928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:01:58.868897   74928 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:01:58.868971   74928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:01:58.876778   74928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0213 23:01:58.892232   74928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:01:58.907990   74928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0213 23:01:58.923492   74928 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0213 23:01:58.926829   74928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:01:58.936256   74928 certs.go:56] Setting up /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502 for IP: 192.168.49.2
	I0213 23:01:58.936295   74928 certs.go:190] acquiring lock for shared ca certs: {Name:mkdb62e9ebaf532b9b3d230de7912db241faf3db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:58.936450   74928 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.key
	I0213 23:01:59.093397   74928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt ...
	I0213 23:01:59.093429   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt: {Name:mkcf281713fd12da39950efae50854b08ec69f43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.093632   74928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-66678/.minikube/ca.key ...
	I0213 23:01:59.093649   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/ca.key: {Name:mk225d429655218b9b579662fe6463af54f8cb85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.093749   74928 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.key
	I0213 23:01:59.395159   74928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.crt ...
	I0213 23:01:59.395189   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.crt: {Name:mk05d31287cd9bb468f5aae5f083e3b0be506f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.395384   74928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.key ...
	I0213 23:01:59.395403   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.key: {Name:mkfc546f26fcdc0c4a4d3a5ba65de81c69e801b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.395728   74928 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.key
	I0213 23:01:59.395752   74928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt with IP's: []
	I0213 23:01:59.526797   74928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt ...
	I0213 23:01:59.526832   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: {Name:mk4a811e002ff90cce18e260f32e0acf1acb4d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.527016   74928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.key ...
	I0213 23:01:59.527033   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.key: {Name:mkd8c37d87ca35621b5c85f65432c3c210e0308f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.527125   74928 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.key.dd3b5fb2
	I0213 23:01:59.527147   74928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 23:01:59.621220   74928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.crt.dd3b5fb2 ...
	I0213 23:01:59.621254   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.crt.dd3b5fb2: {Name:mk94d711cbc838f540a49336000043c160510b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.621453   74928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.key.dd3b5fb2 ...
	I0213 23:01:59.621474   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.key.dd3b5fb2: {Name:mkb47d1757d7852a332b9309583376038449085f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.621579   74928 certs.go:337] copying /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.crt
	I0213 23:01:59.621683   74928 certs.go:341] copying /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.key
	I0213 23:01:59.621755   74928 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/proxy-client.key
	I0213 23:01:59.621776   74928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/proxy-client.crt with IP's: []
	I0213 23:01:59.876903   74928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/proxy-client.crt ...
	I0213 23:01:59.876941   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/proxy-client.crt: {Name:mk87a7369728a86f2a86e241b41de8175819e02a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.877140   74928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/proxy-client.key ...
	I0213 23:01:59.877162   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/proxy-client.key: {Name:mka4e4f7ea3dd20887482b9f5ba6abf9b502b86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:59.877393   74928 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca-key.pem (1679 bytes)
	I0213 23:01:59.877438   74928 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:01:59.877477   74928 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:01:59.877511   74928 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/key.pem (1679 bytes)
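
The certs.go steps above create a local CA (minikubeCA) and then sign the client, apiserver, and aggregator certificates against it; the apiserver certificate carries the IP SANs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] logged at 23:01:59.527147. A compressed sketch of that CA-then-leaf flow with Go's standard library (not minikube's actual crypto.go; error handling trimmed):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA, analogous to .minikube/ca.crt and ca.key.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf certificate signed by the CA, carrying the apiserver IP SANs
		// from the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			IPAddresses: []net.IP{
				net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
		_ = os.WriteFile("apiserver.crt",
			pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER}), 0644)
	}
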
	I0213 23:01:59.878116   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:01:59.899794   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:01:59.920403   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:01:59.941045   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:01:59.962196   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:01:59.982676   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:02:00.003213   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:02:00.023877   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:02:00.044456   74928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:02:00.065346   74928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:02:00.081154   74928 ssh_runner.go:195] Run: openssl version
	I0213 23:02:00.086740   74928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:02:00.095160   74928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:02:00.098476   74928 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:02:00.098528   74928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:02:00.104713   74928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:02:00.112730   74928 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:02:00.115556   74928 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 23:02:00.115633   74928 kubeadm.go:404] StartCluster: {Name:addons-913502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-913502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:02:00.115705   74928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:02:00.115740   74928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:02:00.147868   74928 cri.go:89] found id: ""
	I0213 23:02:00.147937   74928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:02:00.155791   74928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:02:00.163702   74928 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 23:02:00.163759   74928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:02:00.171567   74928 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:02:00.171605   74928 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 23:02:00.250786   74928 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0213 23:02:00.313250   74928 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:02:09.611168   74928 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:02:09.611248   74928 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:02:09.611357   74928 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0213 23:02:09.611435   74928 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0213 23:02:09.611481   74928 kubeadm.go:322] OS: Linux
	I0213 23:02:09.611517   74928 kubeadm.go:322] CGROUPS_CPU: enabled
	I0213 23:02:09.611560   74928 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0213 23:02:09.611596   74928 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0213 23:02:09.611637   74928 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0213 23:02:09.611674   74928 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0213 23:02:09.611716   74928 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0213 23:02:09.611754   74928 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0213 23:02:09.611791   74928 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0213 23:02:09.611827   74928 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0213 23:02:09.611905   74928 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:02:09.612028   74928 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:02:09.612113   74928 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:02:09.612167   74928 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:02:09.613734   74928 out.go:204]   - Generating certificates and keys ...
	I0213 23:02:09.613815   74928 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:02:09.613884   74928 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:02:09.613967   74928 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 23:02:09.614015   74928 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 23:02:09.614104   74928 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 23:02:09.614191   74928 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 23:02:09.614253   74928 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 23:02:09.614380   74928 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-913502 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0213 23:02:09.614439   74928 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 23:02:09.614583   74928 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-913502 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0213 23:02:09.614642   74928 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 23:02:09.614694   74928 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 23:02:09.614736   74928 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 23:02:09.614785   74928 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:02:09.614825   74928 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:02:09.614867   74928 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:02:09.614960   74928 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:02:09.615035   74928 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:02:09.615121   74928 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:02:09.615193   74928 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:02:09.616897   74928 out.go:204]   - Booting up control plane ...
	I0213 23:02:09.616996   74928 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:02:09.617062   74928 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:02:09.617136   74928 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:02:09.617263   74928 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:02:09.617376   74928 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:02:09.617417   74928 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:02:09.617535   74928 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:02:09.617593   74928 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002702 seconds
	I0213 23:02:09.617672   74928 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:02:09.617784   74928 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:02:09.617842   74928 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:02:09.617985   74928 kubeadm.go:322] [mark-control-plane] Marking the node addons-913502 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:02:09.618057   74928 kubeadm.go:322] [bootstrap-token] Using token: unxy31.n4ikrym8ylskkum4
	I0213 23:02:09.619538   74928 out.go:204]   - Configuring RBAC rules ...
	I0213 23:02:09.619664   74928 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:02:09.619778   74928 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:02:09.619970   74928 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:02:09.620128   74928 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:02:09.620289   74928 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:02:09.620439   74928 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:02:09.620590   74928 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:02:09.620663   74928 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:02:09.620711   74928 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:02:09.620717   74928 kubeadm.go:322] 
	I0213 23:02:09.620777   74928 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:02:09.620783   74928 kubeadm.go:322] 
	I0213 23:02:09.620889   74928 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:02:09.620901   74928 kubeadm.go:322] 
	I0213 23:02:09.620936   74928 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:02:09.620981   74928 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:02:09.621023   74928 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:02:09.621028   74928 kubeadm.go:322] 
	I0213 23:02:09.621071   74928 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:02:09.621077   74928 kubeadm.go:322] 
	I0213 23:02:09.621111   74928 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:02:09.621117   74928 kubeadm.go:322] 
	I0213 23:02:09.621163   74928 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:02:09.621226   74928 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:02:09.621279   74928 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:02:09.621284   74928 kubeadm.go:322] 
	I0213 23:02:09.621350   74928 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:02:09.621408   74928 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:02:09.621415   74928 kubeadm.go:322] 
	I0213 23:02:09.621476   74928 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token unxy31.n4ikrym8ylskkum4 \
	I0213 23:02:09.621555   74928 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:65a739a3fc766348b9b774a07bf25aabb4395eca8f80a3b593899c4975cd65db \
	I0213 23:02:09.621572   74928 kubeadm.go:322] 	--control-plane 
	I0213 23:02:09.621579   74928 kubeadm.go:322] 
	I0213 23:02:09.621682   74928 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:02:09.621698   74928 kubeadm.go:322] 
	I0213 23:02:09.621772   74928 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token unxy31.n4ikrym8ylskkum4 \
	I0213 23:02:09.621944   74928 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:65a739a3fc766348b9b774a07bf25aabb4395eca8f80a3b593899c4975cd65db 
	I0213 23:02:09.621978   74928 cni.go:84] Creating CNI manager for ""
	I0213 23:02:09.621990   74928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:02:09.623781   74928 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0213 23:02:09.625221   74928 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0213 23:02:09.661187   74928 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0213 23:02:09.661212   74928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0213 23:02:09.677721   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0213 23:02:10.340192   74928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:02:10.340272   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:10.340286   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802 minikube.k8s.io/name=addons-913502 minikube.k8s.io/updated_at=2024_02_13T23_02_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:10.417873   74928 ops.go:34] apiserver oom_adj: -16
	I0213 23:02:10.418015   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:10.918402   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:11.418317   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:11.918115   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:12.419052   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:12.918741   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:13.418656   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:13.918840   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:14.418821   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:14.918986   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:15.418873   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:15.919084   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:16.418610   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:16.918097   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:17.418501   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:17.918815   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:18.418264   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:18.918398   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:19.418543   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:19.918327   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:20.418500   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:20.918842   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:21.418863   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:21.918604   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:22.418857   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:22.918861   74928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:02:22.982651   74928 kubeadm.go:1088] duration metric: took 12.642436469s to wait for elevateKubeSystemPrivileges.
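
The burst of identical "kubectl get sa default" runs above is a fixed-interval poll: the bootstrapper retries until the default service account exists (about 12.6 seconds in this run) before binding kube-system privileges. A stand-alone sketch of that wait (interval and timeout values are illustrative; the kubectl invocation matches the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for {
			cmd := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.28.4/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			if time.Now().After(deadline) {
				panic("timed out waiting for the default service account")
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
	}
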
	I0213 23:02:22.982687   74928 kubeadm.go:406] StartCluster complete in 22.867058371s
	I0213 23:02:22.982711   74928 settings.go:142] acquiring lock: {Name:mk89817e7b00c42ae84864184d25a5290738d17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:02:22.982831   74928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:02:22.983213   74928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/kubeconfig: {Name:mk1392731503c3f5245f6110a90036e5311cfc32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:02:22.983472   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:02:22.983491   74928 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0213 23:02:22.983587   74928 addons.go:69] Setting yakd=true in profile "addons-913502"
	I0213 23:02:22.983613   74928 addons.go:234] Setting addon yakd=true in "addons-913502"
	I0213 23:02:22.983656   74928 addons.go:69] Setting ingress-dns=true in profile "addons-913502"
	I0213 23:02:22.983676   74928 addons.go:234] Setting addon ingress-dns=true in "addons-913502"
	I0213 23:02:22.983676   74928 addons.go:69] Setting registry=true in profile "addons-913502"
	I0213 23:02:22.983697   74928 config.go:182] Loaded profile config "addons-913502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:02:22.983708   74928 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-913502"
	I0213 23:02:22.983715   74928 addons.go:69] Setting storage-provisioner=true in profile "addons-913502"
	I0213 23:02:22.983727   74928 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-913502"
	I0213 23:02:22.983731   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.983744   74928 addons.go:69] Setting inspektor-gadget=true in profile "addons-913502"
	I0213 23:02:22.983750   74928 addons.go:69] Setting volumesnapshots=true in profile "addons-913502"
	I0213 23:02:22.983760   74928 addons.go:234] Setting addon inspektor-gadget=true in "addons-913502"
	I0213 23:02:22.983770   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.983771   74928 addons.go:234] Setting addon volumesnapshots=true in "addons-913502"
	I0213 23:02:22.983742   74928 addons.go:69] Setting metrics-server=true in profile "addons-913502"
	I0213 23:02:22.983781   74928 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-913502"
	I0213 23:02:22.983793   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.983804   74928 addons.go:234] Setting addon metrics-server=true in "addons-913502"
	I0213 23:02:22.983810   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.983858   74928 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-913502"
	I0213 23:02:22.983860   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.983699   74928 addons.go:234] Setting addon registry=true in "addons-913502"
	I0213 23:02:22.983910   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.983938   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.984184   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.984258   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.984269   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.984283   74928 addons.go:69] Setting helm-tiller=true in profile "addons-913502"
	I0213 23:02:22.984296   74928 addons.go:234] Setting addon helm-tiller=true in "addons-913502"
	I0213 23:02:22.984353   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.983727   74928 addons.go:234] Setting addon storage-provisioner=true in "addons-913502"
	I0213 23:02:22.984388   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.984402   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.984530   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.984846   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.985007   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.983683   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.985740   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.985796   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.983743   74928 addons.go:69] Setting default-storageclass=true in profile "addons-913502"
	I0213 23:02:22.990881   74928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-913502"
	I0213 23:02:22.983771   74928 addons.go:69] Setting cloud-spanner=true in profile "addons-913502"
	I0213 23:02:22.991095   74928 addons.go:234] Setting addon cloud-spanner=true in "addons-913502"
	I0213 23:02:22.991160   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.991725   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.991911   74928 addons.go:69] Setting ingress=true in profile "addons-913502"
	I0213 23:02:22.991932   74928 addons.go:234] Setting addon ingress=true in "addons-913502"
	I0213 23:02:22.991993   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:22.984269   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.993062   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:22.983735   74928 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-913502"
	I0213 23:02:23.002086   74928 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-913502"
	I0213 23:02:23.002506   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:23.002837   74928 addons.go:69] Setting gcp-auth=true in profile "addons-913502"
	I0213 23:02:23.002878   74928 mustload.go:65] Loading cluster: addons-913502
	I0213 23:02:23.003106   74928 config.go:182] Loaded profile config "addons-913502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:02:23.003417   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:23.004309   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:23.031862   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0213 23:02:23.033835   74928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0213 23:02:23.033908   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0213 23:02:23.034001   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
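
The Go template in that inspect call walks .NetworkSettings.Ports to pull out the host port mapped to the container's 22/tcp. `docker port` reads the same mapping directly; in this run it would report the 32772 that every sshutil client below dials:

    docker port addons-913502 22/tcp
    # 127.0.0.1:32772
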
	I0213 23:02:23.032532   74928 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0213 23:02:23.036190   74928 out.go:177]   - Using image docker.io/registry:2.8.3
	I0213 23:02:23.037837   74928 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0213 23:02:23.036167   74928 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:02:23.039354   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:02:23.039417   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.039569   74928 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0213 23:02:23.039586   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0213 23:02:23.039667   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
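
Each `scp memory --> <path>` line is minikube streaming an embedded addon manifest over the SSH session rather than copying a file from disk. A rough shell equivalent, with $MANIFEST standing in for the in-memory bytes (a hypothetical variable, not minikube's actual mechanism):

    printf '%s' "$MANIFEST" |
      ssh -p 32772 -i ~/.minikube/machines/addons-913502/id_rsa docker@127.0.0.1 \
        "sudo tee /etc/kubernetes/addons/registry-rc.yaml >/dev/null"
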
	I0213 23:02:23.051407   74928 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0213 23:02:23.047005   74928 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-913502"
	I0213 23:02:23.053844   74928 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0213 23:02:23.054002   74928 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:02:23.054046   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:23.060044   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0213 23:02:23.060130   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0213 23:02:23.060136   74928 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0213 23:02:23.065283   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:23.065293   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:23.065952   74928 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0213 23:02:23.066282   74928 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0213 23:02:23.068515   74928 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0213 23:02:23.068601   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.070874   74928 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:02:23.078635   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0213 23:02:23.073093   74928 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 23:02:23.073162   74928 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0213 23:02:23.073234   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.073375   74928 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0213 23:02:23.073451   74928 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0213 23:02:23.073943   74928 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 23:02:23.073959   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:02:23.081254   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.081475   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0213 23:02:23.081536   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.081666   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0213 23:02:23.081714   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.084398   74928 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0213 23:02:23.084419   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0213 23:02:23.084478   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.083047   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0213 23:02:23.086824   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.083132   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0213 23:02:23.087039   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.089123   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0213 23:02:23.090761   74928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 23:02:23.097820   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0213 23:02:23.097802   74928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0213 23:02:23.099279   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.102907   74928 out.go:177]   - Using image docker.io/busybox:stable
	I0213 23:02:23.103559   74928 addons.go:234] Setting addon default-storageclass=true in "addons-913502"
	I0213 23:02:23.101221   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0213 23:02:23.104497   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.104916   74928 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0213 23:02:23.104967   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:23.109605   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:23.112565   74928 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 23:02:23.112584   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0213 23:02:23.112637   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.112669   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0213 23:02:23.112749   74928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 23:02:23.120438   74928 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 23:02:23.120464   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0213 23:02:23.120547   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.114776   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0213 23:02:23.122602   74928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0213 23:02:23.128489   74928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0213 23:02:23.128511   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0213 23:02:23.128570   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.123911   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.127342   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.132510   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.133391   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.136234   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.145281   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.145869   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.150678   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.151507   74928 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:02:23.151522   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:02:23.151561   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:23.152797   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.160097   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	W0213 23:02:23.169014   74928 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0213 23:02:23.169044   74928 retry.go:31] will retry after 213.16075ms: ssh: handshake failed: EOF
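
The handshake EOF is transient: the node's sshd is still settling while a dozen-plus clients dial at once, so sshutil backs off (~213ms here) and redials. A minimal bash sketch of that retry loop, with $SSH_KEY as a stand-in for the id_rsa path logged above:

    attempt=0
    until ssh -p 32772 -i "$SSH_KEY" -o ConnectTimeout=2 docker@127.0.0.1 true; do
      attempt=$((attempt + 1))
      [ "$attempt" -ge 10 ] && { echo 'sshd never came up' >&2; exit 1; }
      sleep 0.2   # mirrors the ~213ms backoff in the log
    done
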
	I0213 23:02:23.189261   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:23.192668   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:02:23.364619   74928 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:02:23.364720   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0213 23:02:23.387238   74928 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0213 23:02:23.387332   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0213 23:02:23.387243   74928 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0213 23:02:23.387432   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0213 23:02:23.390629   74928 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:02:23.390670   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:02:23.484241   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 23:02:23.563949   74928 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-913502" context rescaled to 1 replicas
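
kapi.go:248 drops the coredns Deployment to a single replica, which is plenty for a one-node cluster. The equivalent direct call would be:

    kubectl --context addons-913502 -n kube-system scale deployment coredns --replicas=1
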
	I0213 23:02:23.564077   74928 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:02:23.566218   74928 out.go:177] * Verifying Kubernetes components...
	I0213 23:02:23.567950   74928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:02:23.568779   74928 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0213 23:02:23.568829   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0213 23:02:23.575121   74928 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0213 23:02:23.575157   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0213 23:02:23.578358   74928 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0213 23:02:23.578381   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0213 23:02:23.580745   74928 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0213 23:02:23.580767   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0213 23:02:23.583468   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 23:02:23.585933   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0213 23:02:23.661184   74928 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:02:23.661288   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:02:23.662332   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:02:23.762737   74928 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0213 23:02:23.762828   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0213 23:02:23.763106   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 23:02:23.764021   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 23:02:23.779502   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0213 23:02:23.779758   74928 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0213 23:02:23.779805   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0213 23:02:23.781142   74928 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0213 23:02:23.781211   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0213 23:02:23.782278   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:02:23.965016   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:02:23.966343   74928 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0213 23:02:23.966412   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0213 23:02:23.968040   74928 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0213 23:02:23.968103   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0213 23:02:23.969633   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0213 23:02:24.178833   74928 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0213 23:02:24.178927   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0213 23:02:24.263228   74928 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0213 23:02:24.263323   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0213 23:02:24.282218   74928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0213 23:02:24.282299   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0213 23:02:24.675596   74928 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0213 23:02:24.675695   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0213 23:02:24.873617   74928 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0213 23:02:24.873708   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0213 23:02:24.883440   74928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0213 23:02:24.883470   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0213 23:02:24.962072   74928 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0213 23:02:24.962202   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0213 23:02:25.079877   74928 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0213 23:02:25.079973   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0213 23:02:25.178929   74928 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.986222793s)
	I0213 23:02:25.178970   74928 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
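
The bash pipeline that just completed (1.99s, including the two kubectl round-trips) splices a hosts block into the Corefile ahead of the forward directive, so in-cluster lookups of host.minikube.internal resolve to the host-side gateway. The injected fragment, read straight off the sed expression:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

It can be confirmed afterwards with `kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'`.
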
	I0213 23:02:25.281008   74928 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0213 23:02:25.281100   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0213 23:02:25.462179   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0213 23:02:25.580592   74928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0213 23:02:25.580689   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0213 23:02:25.762351   74928 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 23:02:25.762459   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0213 23:02:25.867338   74928 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0213 23:02:25.867431   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0213 23:02:26.074969   74928 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0213 23:02:26.075073   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0213 23:02:26.181807   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 23:02:26.368405   74928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0213 23:02:26.368489   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0213 23:02:26.371468   74928 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 23:02:26.371539   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0213 23:02:26.682999   74928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0213 23:02:26.683075   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0213 23:02:26.763436   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 23:02:27.164865   74928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0213 23:02:27.164963   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0213 23:02:27.379169   74928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0213 23:02:27.379254   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0213 23:02:27.582716   74928 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 23:02:27.582754   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0213 23:02:27.768007   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 23:02:29.878176   74928 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0213 23:02:29.878299   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:29.902147   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:29.980848   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.496480267s)
	I0213 23:02:29.980901   74928 addons.go:470] Verifying addon ingress=true in "addons-913502"
	I0213 23:02:29.982345   74928 out.go:177] * Verifying ingress addon...
	I0213 23:02:29.981085   74928 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.413066893s)
	I0213 23:02:29.981193   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.39769359s)
	I0213 23:02:29.981243   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.39523503s)
	I0213 23:02:29.981291   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.318879963s)
	I0213 23:02:29.981340   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.218176586s)
	I0213 23:02:29.981383   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.217336657s)
	I0213 23:02:29.981422   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.201784987s)
	I0213 23:02:29.981506   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.198981444s)
	I0213 23:02:29.981547   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.016503521s)
	I0213 23:02:29.981578   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.011857893s)
	I0213 23:02:29.981618   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.519338766s)
	I0213 23:02:29.981715   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.799822945s)
	I0213 23:02:29.981772   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.218245521s)
	I0213 23:02:29.984635   74928 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0213 23:02:29.984872   74928 addons.go:470] Verifying addon metrics-server=true in "addons-913502"
	I0213 23:02:29.984898   74928 addons.go:470] Verifying addon registry=true in "addons-913502"
	I0213 23:02:29.988104   74928 out.go:177] * Verifying registry addon...
	I0213 23:02:29.989644   74928 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-913502 service yakd-dashboard -n yakd-dashboard
	
	W0213 23:02:29.985071   74928 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0213 23:02:29.985575   74928 node_ready.go:35] waiting up to 6m0s for node "addons-913502" to be "Ready" ...
	I0213 23:02:29.991716   74928 retry.go:31] will retry after 138.392466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
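
Both failures are the same race: the batch creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass instance in one kubectl invocation, and the API server has not established the new types by the time the CR is submitted, hence "ensure CRDs are installed first". The retry (and the `apply --force` rerun below) converges once the CRDs are served; a two-phase apply avoids the race outright, sketched here with the same manifest paths:

    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
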
	I0213 23:02:29.994778   74928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0213 23:02:30.063610   74928 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0213 23:02:30.063638   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:30.066068   74928 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0213 23:02:30.066133   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
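
The kapi.go:96 lines that fill the rest of this log are minikube's readiness poll, one probe per label selector roughly every 500ms. The same check as a single blocking call, selector taken from the log:

    kubectl --context addons-913502 -n ingress-nginx wait pod \
      --selector app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=6m0s
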
	W0213 23:02:30.067017   74928 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
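
That warning is the API server's optimistic-concurrency check firing: minikube read the local-path StorageClass, a concurrent writer bumped its resourceVersion, and the now-stale update was rejected. A merge patch carries no resourceVersion and sidesteps the conflict; a hypothetical one-liner for the same default-class toggle:

    kubectl --context addons-913502 patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
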
	I0213 23:02:30.133078   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 23:02:30.144860   74928 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0213 23:02:30.176952   74928 addons.go:234] Setting addon gcp-auth=true in "addons-913502"
	I0213 23:02:30.177029   74928 host.go:66] Checking if "addons-913502" exists ...
	I0213 23:02:30.177460   74928 cli_runner.go:164] Run: docker container inspect addons-913502 --format={{.State.Status}}
	I0213 23:02:30.201007   74928 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0213 23:02:30.201079   74928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-913502
	I0213 23:02:30.220078   74928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/addons-913502/id_rsa Username:docker}
	I0213 23:02:30.488245   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:30.499069   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:30.982712   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.214645965s)
	I0213 23:02:30.982756   74928 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-913502"
	I0213 23:02:30.985542   74928 out.go:177] * Verifying csi-hostpath-driver addon...
	I0213 23:02:30.988078   74928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0213 23:02:30.988511   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:30.992836   74928 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0213 23:02:30.992860   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:30.997896   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:31.263697   74928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.130533125s)
	I0213 23:02:31.263744   74928 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.062704974s)
	I0213 23:02:31.265775   74928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 23:02:31.267270   74928 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0213 23:02:31.268627   74928 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0213 23:02:31.268650   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0213 23:02:31.285679   74928 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0213 23:02:31.285706   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0213 23:02:31.301953   74928 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 23:02:31.301976   74928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0213 23:02:31.318463   74928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 23:02:31.489837   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:31.492840   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:31.500586   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:31.768196   74928 addons.go:470] Verifying addon gcp-auth=true in "addons-913502"
	I0213 23:02:31.770017   74928 out.go:177] * Verifying gcp-auth addon...
	I0213 23:02:31.772404   74928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0213 23:02:31.775066   74928 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0213 23:02:31.775086   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:31.988757   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:31.993045   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:32.010923   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:32.012048   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:32.276134   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:32.489242   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:32.491967   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:32.498625   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:32.776548   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:32.988948   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:32.991643   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:32.998362   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:33.276706   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:33.568220   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:33.568639   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:33.569337   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:33.775962   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:33.989717   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:33.992957   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:33.998584   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:34.276538   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:34.564927   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:34.566203   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:34.567003   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:34.568645   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:34.776751   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:34.990868   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:34.994129   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:34.999067   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:35.277284   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:35.489832   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:35.492724   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:35.498427   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:35.776948   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:35.989013   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:35.992193   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:35.999151   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:36.277123   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:36.488777   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:36.492466   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:36.498976   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:36.776433   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:36.989118   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:36.991704   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:36.994633   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:36.999086   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:37.276402   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:37.489274   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:37.491868   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:37.498371   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:37.776537   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:37.989294   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:37.991948   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:37.997920   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:38.275919   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:38.488500   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:38.491750   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:38.497858   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:38.776298   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:38.989352   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:38.991620   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:38.994795   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:38.997867   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:39.276196   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:39.488986   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:39.492912   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:39.497960   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:39.777073   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:39.988750   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:39.992266   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:39.997861   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:40.278608   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:40.488739   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:40.492429   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:40.498056   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:40.776349   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:40.989541   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:40.992147   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:40.994911   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:40.998314   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:41.276465   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:41.489115   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:41.491745   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:41.498063   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:41.776223   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:41.989203   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:41.992603   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:41.997783   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:42.275894   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:42.488422   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:42.491821   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:42.497739   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:42.776101   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:42.988855   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:42.992378   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:43.000152   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:43.276474   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:43.489015   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:43.491441   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:43.494388   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:43.498084   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:43.776658   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:43.988288   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:43.991547   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:43.998261   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:44.276372   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:44.488896   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:44.491644   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:44.498464   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:44.776532   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:44.989088   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:44.991493   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:44.998005   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:45.276191   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:45.488607   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:45.491876   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:45.497817   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:45.776027   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:45.988622   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:45.992102   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:45.994667   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:45.998396   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:46.276528   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:46.488596   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:46.491740   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:46.498749   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:46.776011   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:46.988972   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:46.992278   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:46.998016   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:47.276390   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:47.489274   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:47.491599   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:47.498221   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:47.776109   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:47.988770   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:47.991917   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:47.998108   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:48.276233   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:48.489071   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:48.491556   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:48.494551   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:48.498067   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:48.776156   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:48.989232   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:48.991795   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:48.998784   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:49.276162   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:49.488644   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:49.492302   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:49.498751   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:49.775921   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:49.988269   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:49.991470   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:49.997979   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:50.276130   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:50.488253   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:50.491560   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:50.498064   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:50.776234   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:50.988621   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:50.991756   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:50.993946   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:50.998748   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:51.275941   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:51.488758   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:51.492380   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:51.498252   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:51.776635   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:51.988856   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:51.992028   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:51.998634   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:52.275774   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:52.489878   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:52.492417   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:52.497802   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:52.775936   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:52.988344   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:52.991580   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:52.994617   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:52.998237   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:53.276532   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:53.489007   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:53.491485   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:53.497881   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:53.776240   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:53.989145   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:53.991414   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:53.998472   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:54.276494   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:54.489628   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:54.491806   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:54.497928   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:54.776011   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:54.988525   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:54.991612   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:54.998115   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:55.276636   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:55.489205   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:55.491374   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:55.494330   74928 node_ready.go:58] node "addons-913502" has status "Ready":"False"
	I0213 23:02:55.497840   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:55.775886   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:55.988988   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:55.992887   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:55.998088   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:56.276285   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:56.489013   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:56.492849   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:56.498657   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:56.775866   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:56.988357   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:56.992015   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:56.998177   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:57.280070   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:57.489235   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:57.493840   74928 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0213 23:02:57.493868   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:57.495082   74928 node_ready.go:49] node "addons-913502" has status "Ready":"True"
	I0213 23:02:57.495106   74928 node_ready.go:38] duration metric: took 27.50339897s waiting for node "addons-913502" to be "Ready" ...
	I0213 23:02:57.495119   74928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:02:57.498618   74928 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0213 23:02:57.498637   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:57.503479   74928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kw9vb" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:57.776437   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:57.990216   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:57.994691   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:58.067730   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:58.277049   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:58.490459   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:58.494500   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:58.564373   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:58.776246   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:58.989822   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:58.993568   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:59.000232   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:59.009640   74928 pod_ready.go:92] pod "coredns-5dd5756b68-kw9vb" in "kube-system" namespace has status "Ready":"True"
	I0213 23:02:59.009682   74928 pod_ready.go:81] duration metric: took 1.50617685s waiting for pod "coredns-5dd5756b68-kw9vb" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.009715   74928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-913502" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.015867   74928 pod_ready.go:92] pod "etcd-addons-913502" in "kube-system" namespace has status "Ready":"True"
	I0213 23:02:59.016025   74928 pod_ready.go:81] duration metric: took 6.292931ms waiting for pod "etcd-addons-913502" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.016069   74928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-913502" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.021130   74928 pod_ready.go:92] pod "kube-apiserver-addons-913502" in "kube-system" namespace has status "Ready":"True"
	I0213 23:02:59.021151   74928 pod_ready.go:81] duration metric: took 5.051562ms waiting for pod "kube-apiserver-addons-913502" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.021161   74928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-913502" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.025792   74928 pod_ready.go:92] pod "kube-controller-manager-addons-913502" in "kube-system" namespace has status "Ready":"True"
	I0213 23:02:59.025814   74928 pod_ready.go:81] duration metric: took 4.647644ms waiting for pod "kube-controller-manager-addons-913502" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.025825   74928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dd5xd" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.096511   74928 pod_ready.go:92] pod "kube-proxy-dd5xd" in "kube-system" namespace has status "Ready":"True"
	I0213 23:02:59.096538   74928 pod_ready.go:81] duration metric: took 70.705683ms waiting for pod "kube-proxy-dd5xd" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.096551   74928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-913502" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.277144   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:59.489134   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:59.493577   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:59.495254   74928 pod_ready.go:92] pod "kube-scheduler-addons-913502" in "kube-system" namespace has status "Ready":"True"
	I0213 23:02:59.495277   74928 pod_ready.go:81] duration metric: took 398.717851ms waiting for pod "kube-scheduler-addons-913502" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.495290   74928 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-jv886" in "kube-system" namespace to be "Ready" ...
	I0213 23:02:59.498215   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:02:59.776473   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:02:59.989407   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:02:59.993428   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:02:59.999344   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:00.276454   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:00.489459   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:00.493607   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:00.499788   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:00.776214   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:00.988640   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:00.993118   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:00.999265   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:01.276373   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:01.489059   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:01.493873   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:01.499773   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:01.501201   74928 pod_ready.go:102] pod "metrics-server-69cf46c98-jv886" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:01.776629   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:01.990896   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:01.995167   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:01.999268   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:02.276433   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:02.490179   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:02.494417   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:02.499522   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:02.777269   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:02.989030   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:02.992998   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:03.063872   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:03.276863   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:03.488623   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:03.493630   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:03.499581   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:03.776246   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:03.989471   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:03.992793   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:03.998620   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:04.000709   74928 pod_ready.go:102] pod "metrics-server-69cf46c98-jv886" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:04.276802   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:04.489954   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:04.498124   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:04.573570   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:04.779890   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:04.991010   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:04.996262   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:05.063797   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:05.074497   74928 pod_ready.go:92] pod "metrics-server-69cf46c98-jv886" in "kube-system" namespace has status "Ready":"True"
	I0213 23:03:05.074524   74928 pod_ready.go:81] duration metric: took 5.579225583s waiting for pod "metrics-server-69cf46c98-jv886" in "kube-system" namespace to be "Ready" ...
	I0213 23:03:05.074538   74928 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace to be "Ready" ...
	I0213 23:03:05.276563   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:05.490088   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:05.494009   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:05.499509   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:05.776388   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:05.989429   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:05.992927   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:05.999260   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:06.276516   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:06.489262   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:06.493786   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:06.500276   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:06.777065   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:06.989395   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:06.995048   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:06.998908   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:07.080790   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:07.276118   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:07.489370   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:07.494330   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:07.499432   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:07.775778   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:07.989798   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:07.993149   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:07.999099   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:08.276524   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:08.490020   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:08.493124   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:08.499013   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:08.783920   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:08.989552   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:08.993279   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:09.062772   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:09.081649   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:09.276775   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:09.490224   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:09.494229   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:09.498742   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:09.775923   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:09.988494   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:09.993209   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:09.999395   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:10.275851   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:10.488901   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:10.493997   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:10.499846   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:10.776718   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:10.990162   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:11.064604   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:11.066019   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:11.081911   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:11.277010   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:11.489970   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:11.493571   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:11.499981   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:11.777250   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:11.989224   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:11.993444   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:11.999431   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:12.276192   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:12.490420   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:12.493751   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:12.498823   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:12.776728   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:12.989754   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:12.993655   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:13.000565   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:13.276164   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:13.489285   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:13.493196   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:13.498860   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:13.581546   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:13.777901   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:14.073081   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:14.074353   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:14.075044   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:14.276581   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:14.491424   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:14.562670   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:14.565505   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:14.776000   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:14.990208   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:14.994885   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:15.004998   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:15.275915   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:15.489615   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:15.493465   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:15.499534   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:15.776627   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:15.990303   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:15.994920   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:15.999508   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:16.082357   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:16.276625   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:16.489789   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:16.493925   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:16.499830   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:16.776554   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:16.990502   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:16.994367   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:16.999762   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:17.277285   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:17.489591   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:17.494173   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:17.500170   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:17.777163   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:17.989528   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:17.992916   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:17.998693   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:18.084076   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:18.275964   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:18.489466   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:18.492520   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:18.499761   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:18.775566   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:18.989542   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:18.992909   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:18.999047   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:19.275923   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:19.490371   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:19.494521   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:19.499214   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:19.776217   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:19.991417   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:19.994075   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:19.999328   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:20.276240   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:20.488988   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:20.493622   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:20.499245   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:20.581713   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:20.775853   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:20.989494   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:20.992805   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:20.998636   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 23:03:21.276731   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:21.489815   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:21.495782   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:21.499626   74928 kapi.go:107] duration metric: took 51.504843171s to wait for kubernetes.io/minikube-addons=registry ...
	I0213 23:03:21.776301   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:21.989424   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:21.992867   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:22.275738   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:22.489608   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:22.493662   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:22.776894   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:22.988976   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:22.997020   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:23.080386   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:23.276801   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:23.492180   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:23.496237   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:23.777153   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:23.989514   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:23.993837   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:24.276715   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:24.490054   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:24.493615   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:24.776523   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:24.989428   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:24.993475   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:25.081038   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:25.276737   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:25.490306   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:25.494154   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:25.776533   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:25.990427   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:25.993572   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:26.276625   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:26.489638   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:26.493258   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:26.777050   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:26.989143   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:26.993942   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:27.081166   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:27.276550   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:27.489536   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:27.492536   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:27.776690   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:27.989478   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:27.992784   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:28.276575   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:28.489131   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:28.492773   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:28.776620   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:28.989419   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:28.992736   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:29.082442   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:29.276004   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:29.489755   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:29.493307   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:29.777364   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:30.080543   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:30.081767   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:30.362272   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:30.491911   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:30.564414   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:30.777338   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:30.989889   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:30.993703   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:31.308308   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:31.492979   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:31.494146   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:31.581931   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:31.775929   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:31.989593   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:31.993989   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:32.276473   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:32.490090   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:32.494609   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:32.777555   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:32.990148   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:32.994441   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:33.276694   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:33.489411   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:33.493097   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:33.776089   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:33.989014   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:33.993222   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:34.081800   74928 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"False"
	I0213 23:03:34.275667   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:34.490274   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:34.493636   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:34.776270   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:34.990020   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:34.993233   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:35.276640   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:35.489609   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:35.493145   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:35.581013   74928 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace has status "Ready":"True"
	I0213 23:03:35.581037   74928 pod_ready.go:81] duration metric: took 30.506491278s waiting for pod "nvidia-device-plugin-daemonset-nwpfp" in "kube-system" namespace to be "Ready" ...
	I0213 23:03:35.581055   74928 pod_ready.go:38] duration metric: took 38.085921931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:03:35.581075   74928 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:03:35.581147   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:03:35.581278   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:03:35.617186   74928 cri.go:89] found id: "8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27"
	I0213 23:03:35.617210   74928 cri.go:89] found id: ""
	I0213 23:03:35.617220   74928 logs.go:276] 1 containers: [8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27]
	I0213 23:03:35.617281   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:35.621317   74928 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:03:35.621387   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:03:35.684656   74928 cri.go:89] found id: "01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775"
	I0213 23:03:35.684682   74928 cri.go:89] found id: ""
	I0213 23:03:35.684693   74928 logs.go:276] 1 containers: [01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775]
	I0213 23:03:35.684746   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:35.688029   74928 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:03:35.688086   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:03:35.722548   74928 cri.go:89] found id: "dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03"
	I0213 23:03:35.722574   74928 cri.go:89] found id: ""
	I0213 23:03:35.722583   74928 logs.go:276] 1 containers: [dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03]
	I0213 23:03:35.722640   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:35.762552   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:03:35.762635   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:03:35.776969   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:35.800502   74928 cri.go:89] found id: "65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6"
	I0213 23:03:35.800532   74928 cri.go:89] found id: ""
	I0213 23:03:35.800544   74928 logs.go:276] 1 containers: [65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6]
	I0213 23:03:35.800599   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:35.804048   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:03:35.804109   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:03:35.863719   74928 cri.go:89] found id: "a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f"
	I0213 23:03:35.863751   74928 cri.go:89] found id: ""
	I0213 23:03:35.863762   74928 logs.go:276] 1 containers: [a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f]
	I0213 23:03:35.863830   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:35.867587   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:03:35.867656   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:03:35.902274   74928 cri.go:89] found id: "926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4"
	I0213 23:03:35.902301   74928 cri.go:89] found id: ""
	I0213 23:03:35.902312   74928 logs.go:276] 1 containers: [926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4]
	I0213 23:03:35.902366   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:35.905836   74928 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:03:35.905896   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:03:35.979706   74928 cri.go:89] found id: "2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f"
	I0213 23:03:35.979730   74928 cri.go:89] found id: ""
	I0213 23:03:35.979737   74928 logs.go:276] 1 containers: [2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f]
	I0213 23:03:35.979783   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:35.982995   74928 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:03:35.983017   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:03:35.988345   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:35.992709   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:36.054384   74928 logs.go:123] Gathering logs for etcd [01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775] ...
	I0213 23:03:36.054423   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775"
	I0213 23:03:36.092394   74928 logs.go:123] Gathering logs for kube-scheduler [65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6] ...
	I0213 23:03:36.092425   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6"
	I0213 23:03:36.130467   74928 logs.go:123] Gathering logs for kube-proxy [a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f] ...
	I0213 23:03:36.130501   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f"
	I0213 23:03:36.163074   74928 logs.go:123] Gathering logs for kindnet [2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f] ...
	I0213 23:03:36.163107   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f"
	I0213 23:03:36.198192   74928 logs.go:123] Gathering logs for coredns [dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03] ...
	I0213 23:03:36.198233   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03"
	I0213 23:03:36.244845   74928 logs.go:123] Gathering logs for kube-controller-manager [926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4] ...
	I0213 23:03:36.244890   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4"
	I0213 23:03:36.276588   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:36.301000   74928 logs.go:123] Gathering logs for container status ...
	I0213 23:03:36.301041   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:03:36.341482   74928 logs.go:123] Gathering logs for kubelet ...
	I0213 23:03:36.341518   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:03:36.389644   74928 logs.go:138] Found kubelet problem: Feb 13 23:02:29 addons-913502 kubelet[1554]: W0213 23:02:29.584745    1554 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	W0213 23:03:36.389843   74928 logs.go:138] Found kubelet problem: Feb 13 23:02:29 addons-913502 kubelet[1554]: E0213 23:02:29.584796    1554 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	I0213 23:03:36.421684   74928 logs.go:123] Gathering logs for dmesg ...
	I0213 23:03:36.421726   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:03:36.435954   74928 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:03:36.435985   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:03:36.489938   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:36.494188   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:36.539605   74928 logs.go:123] Gathering logs for kube-apiserver [8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27] ...
	I0213 23:03:36.539642   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27"
	I0213 23:03:36.586543   74928 out.go:304] Setting ErrFile to fd 2...
	I0213 23:03:36.586586   74928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:03:36.586663   74928 out.go:239] X Problems detected in kubelet:
	W0213 23:03:36.586679   74928 out.go:239]   Feb 13 23:02:29 addons-913502 kubelet[1554]: W0213 23:02:29.584745    1554 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	W0213 23:03:36.586691   74928 out.go:239]   Feb 13 23:02:29 addons-913502 kubelet[1554]: E0213 23:02:29.584796    1554 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	I0213 23:03:36.586708   74928 out.go:304] Setting ErrFile to fd 2...
	I0213 23:03:36.586762   74928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:03:36.776621   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:36.989572   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:36.993091   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:37.276421   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:37.489710   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:37.492883   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:37.777172   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:37.989549   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:37.992840   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:38.276607   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:38.489509   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:38.493699   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:38.775985   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 23:03:38.989519   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:38.993401   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:39.277443   74928 kapi.go:107] duration metric: took 1m7.505036996s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0213 23:03:39.279241   74928 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-913502 cluster.
	I0213 23:03:39.280680   74928 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0213 23:03:39.282325   74928 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
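The kapi.go:96 entries above are a fixed-interval readiness poll: minikube re-lists the pods matching a label selector (here kubernetes.io/minikube-addons=gcp-auth) until every match reports the Ready condition, then emits the kapi.go:107 duration metric. Below is a minimal client-go sketch of that pattern; waitForPodsReady, the 500ms interval, and the kubeconfig loading are illustrative assumptions, not minikube's actual kapi implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsReady re-lists pods matching selector every 500ms until all of
// them report the Ready condition, or the timeout elapses.
func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // pod may not exist yet ("Pending: [<nil>]"); keep polling
			}
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
}

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	if err := waitForPodsReady(context.Background(), cs, "gcp-auth",
		"kubernetes.io/minikube-addons=gcp-auth", 7*time.Minute); err != nil {
		fmt.Println("not ready:", err)
		return
	}
	fmt.Printf("took %s to wait for kubernetes.io/minikube-addons=gcp-auth\n", time.Since(start))
}
```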
	I0213 23:03:39.490640   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:39.493505   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:39.989163   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:39.993580   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:40.573519   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:40.575385   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:40.989337   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:41.077821   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:41.567019   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:41.567603   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:41.989975   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:42.065246   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:42.489447   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:42.493849   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:42.989720   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:42.993375   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:43.489699   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:43.493378   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:43.989488   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:43.993191   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:44.489277   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:44.494104   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:44.989106   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:44.994890   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:45.489629   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:45.493461   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:45.988693   74928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 23:03:45.993714   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:46.489926   74928 kapi.go:107] duration metric: took 1m16.505286816s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0213 23:03:46.493301   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:46.587982   74928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:03:46.601144   74928 api_server.go:72] duration metric: took 1m23.036993681s to wait for apiserver process to appear ...
	I0213 23:03:46.601179   74928 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:03:46.601221   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:03:46.601273   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:03:46.635621   74928 cri.go:89] found id: "8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27"
	I0213 23:03:46.635654   74928 cri.go:89] found id: ""
	I0213 23:03:46.635682   74928 logs.go:276] 1 containers: [8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27]
	I0213 23:03:46.635744   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:46.639090   74928 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:03:46.639147   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:03:46.675663   74928 cri.go:89] found id: "01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775"
	I0213 23:03:46.675694   74928 cri.go:89] found id: ""
	I0213 23:03:46.675704   74928 logs.go:276] 1 containers: [01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775]
	I0213 23:03:46.675764   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:46.679390   74928 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:03:46.679470   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:03:46.767335   74928 cri.go:89] found id: "dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03"
	I0213 23:03:46.767369   74928 cri.go:89] found id: ""
	I0213 23:03:46.767380   74928 logs.go:276] 1 containers: [dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03]
	I0213 23:03:46.767439   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:46.770887   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:03:46.770968   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:03:46.808198   74928 cri.go:89] found id: "65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6"
	I0213 23:03:46.808232   74928 cri.go:89] found id: ""
	I0213 23:03:46.808244   74928 logs.go:276] 1 containers: [65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6]
	I0213 23:03:46.808308   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:46.812865   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:03:46.812934   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:03:46.889497   74928 cri.go:89] found id: "a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f"
	I0213 23:03:46.889523   74928 cri.go:89] found id: ""
	I0213 23:03:46.889534   74928 logs.go:276] 1 containers: [a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f]
	I0213 23:03:46.889597   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:46.893092   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:03:46.893171   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:03:47.061996   74928 cri.go:89] found id: "926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4"
	I0213 23:03:47.062026   74928 cri.go:89] found id: ""
	I0213 23:03:47.062037   74928 logs.go:276] 1 containers: [926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4]
	I0213 23:03:47.062094   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:47.067232   74928 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:03:47.067305   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:03:47.068506   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:47.104504   74928 cri.go:89] found id: "2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f"
	I0213 23:03:47.104534   74928 cri.go:89] found id: ""
	I0213 23:03:47.104545   74928 logs.go:276] 1 containers: [2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f]
	I0213 23:03:47.104681   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:47.161685   74928 logs.go:123] Gathering logs for etcd [01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775] ...
	I0213 23:03:47.161789   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775"
	I0213 23:03:47.277306   74928 logs.go:123] Gathering logs for coredns [dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03] ...
	I0213 23:03:47.277343   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03"
	I0213 23:03:47.330898   74928 logs.go:123] Gathering logs for kube-controller-manager [926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4] ...
	I0213 23:03:47.330949   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4"
	I0213 23:03:47.427398   74928 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:03:47.427439   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:03:47.493781   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:47.534906   74928 logs.go:123] Gathering logs for kubelet ...
	I0213 23:03:47.534951   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:03:47.597132   74928 logs.go:138] Found kubelet problem: Feb 13 23:02:29 addons-913502 kubelet[1554]: W0213 23:02:29.584745    1554 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	W0213 23:03:47.597395   74928 logs.go:138] Found kubelet problem: Feb 13 23:02:29 addons-913502 kubelet[1554]: E0213 23:02:29.584796    1554 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	I0213 23:03:47.633132   74928 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:03:47.633174   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:03:47.736670   74928 logs.go:123] Gathering logs for kube-scheduler [65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6] ...
	I0213 23:03:47.736708   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6"
	I0213 23:03:47.776447   74928 logs.go:123] Gathering logs for kube-proxy [a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f] ...
	I0213 23:03:47.776496   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f"
	I0213 23:03:47.815312   74928 logs.go:123] Gathering logs for kindnet [2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f] ...
	I0213 23:03:47.815351   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f"
	I0213 23:03:47.867670   74928 logs.go:123] Gathering logs for container status ...
	I0213 23:03:47.867703   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:03:47.914077   74928 logs.go:123] Gathering logs for dmesg ...
	I0213 23:03:47.914124   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:03:47.929822   74928 logs.go:123] Gathering logs for kube-apiserver [8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27] ...
	I0213 23:03:47.929861   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27"
	I0213 23:03:47.979183   74928 out.go:304] Setting ErrFile to fd 2...
	I0213 23:03:47.979225   74928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:03:47.979306   74928 out.go:239] X Problems detected in kubelet:
	W0213 23:03:47.979319   74928 out.go:239]   Feb 13 23:02:29 addons-913502 kubelet[1554]: W0213 23:02:29.584745    1554 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	W0213 23:03:47.979332   74928 out.go:239]   Feb 13 23:02:29 addons-913502 kubelet[1554]: E0213 23:02:29.584796    1554 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	I0213 23:03:47.979349   74928 out.go:304] Setting ErrFile to fd 2...
	I0213 23:03:47.979359   74928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:03:47.993988   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:48.494305   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:48.994483   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:49.494435   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:49.994370   74928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 23:03:50.493478   74928 kapi.go:107] duration metric: took 1m19.505403701s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0213 23:03:50.495818   74928 out.go:177] * Enabled addons: inspektor-gadget, ingress-dns, helm-tiller, metrics-server, storage-provisioner, cloud-spanner, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0213 23:03:50.497400   74928 addons.go:505] enable addons completed in 1m27.513926194s: enabled=[inspektor-gadget ingress-dns helm-tiller metrics-server storage-provisioner cloud-spanner nvidia-device-plugin yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0213 23:03:57.979582   74928 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0213 23:03:57.984099   74928 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0213 23:03:57.985407   74928 api_server.go:141] control plane version: v1.28.4
	I0213 23:03:57.985438   74928 api_server.go:131] duration metric: took 11.38425193s to wait for apiserver health ...
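The healthz gate above is a plain HTTPS GET against the apiserver (https://192.168.49.2:8443/healthz) that must come back 200 with body "ok". The stand-alone sketch below reproduces such a probe; skipping TLS verification is an assumption made for brevity only, since a real client would trust the cluster CA instead.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls https://<addr>/healthz until it returns 200 with body
// "ok" or the deadline passes. InsecureSkipVerify is for this sketch only
// (assumption); a production client would verify the cluster CA certificate.
func pollHealthz(addr string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://" + addr + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("192.168.49.2:8443", 5*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("healthz returned 200: ok")
}
```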
	I0213 23:03:57.985447   74928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:03:57.985471   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:03:57.985531   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:03:58.019038   74928 cri.go:89] found id: "8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27"
	I0213 23:03:58.019066   74928 cri.go:89] found id: ""
	I0213 23:03:58.019075   74928 logs.go:276] 1 containers: [8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27]
	I0213 23:03:58.019121   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:58.022558   74928 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:03:58.022622   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:03:58.055009   74928 cri.go:89] found id: "01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775"
	I0213 23:03:58.055036   74928 cri.go:89] found id: ""
	I0213 23:03:58.055045   74928 logs.go:276] 1 containers: [01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775]
	I0213 23:03:58.055091   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:58.058387   74928 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:03:58.058440   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:03:58.094250   74928 cri.go:89] found id: "dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03"
	I0213 23:03:58.094277   74928 cri.go:89] found id: ""
	I0213 23:03:58.094287   74928 logs.go:276] 1 containers: [dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03]
	I0213 23:03:58.094344   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:58.097864   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:03:58.097924   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:03:58.130542   74928 cri.go:89] found id: "65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6"
	I0213 23:03:58.130571   74928 cri.go:89] found id: ""
	I0213 23:03:58.130582   74928 logs.go:276] 1 containers: [65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6]
	I0213 23:03:58.130630   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:58.134062   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:03:58.134130   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:03:58.167942   74928 cri.go:89] found id: "a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f"
	I0213 23:03:58.167974   74928 cri.go:89] found id: ""
	I0213 23:03:58.167988   74928 logs.go:276] 1 containers: [a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f]
	I0213 23:03:58.168051   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:58.171436   74928 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:03:58.171490   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:03:58.204459   74928 cri.go:89] found id: "926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4"
	I0213 23:03:58.204485   74928 cri.go:89] found id: ""
	I0213 23:03:58.204493   74928 logs.go:276] 1 containers: [926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4]
	I0213 23:03:58.204542   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:58.207790   74928 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:03:58.207857   74928 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:03:58.240251   74928 cri.go:89] found id: "2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f"
	I0213 23:03:58.240279   74928 cri.go:89] found id: ""
	I0213 23:03:58.240288   74928 logs.go:276] 1 containers: [2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f]
	I0213 23:03:58.240362   74928 ssh_runner.go:195] Run: which crictl
	I0213 23:03:58.243623   74928 logs.go:123] Gathering logs for kubelet ...
	I0213 23:03:58.243645   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:03:58.292090   74928 logs.go:138] Found kubelet problem: Feb 13 23:02:29 addons-913502 kubelet[1554]: W0213 23:02:29.584745    1554 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	W0213 23:03:58.292256   74928 logs.go:138] Found kubelet problem: Feb 13 23:02:29 addons-913502 kubelet[1554]: E0213 23:02:29.584796    1554 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	I0213 23:03:58.326318   74928 logs.go:123] Gathering logs for coredns [dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03] ...
	I0213 23:03:58.326354   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03"
	I0213 23:03:58.372141   74928 logs.go:123] Gathering logs for kube-scheduler [65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6] ...
	I0213 23:03:58.372174   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6"
	I0213 23:03:58.409898   74928 logs.go:123] Gathering logs for kube-controller-manager [926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4] ...
	I0213 23:03:58.409931   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4"
	I0213 23:03:58.467275   74928 logs.go:123] Gathering logs for kindnet [2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f] ...
	I0213 23:03:58.467318   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f"
	I0213 23:03:58.500052   74928 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:03:58.500084   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:03:58.568626   74928 logs.go:123] Gathering logs for container status ...
	I0213 23:03:58.568667   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:03:58.614583   74928 logs.go:123] Gathering logs for dmesg ...
	I0213 23:03:58.614623   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:03:58.629254   74928 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:03:58.629290   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:03:58.732800   74928 logs.go:123] Gathering logs for kube-apiserver [8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27] ...
	I0213 23:03:58.732840   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27"
	I0213 23:03:58.778150   74928 logs.go:123] Gathering logs for etcd [01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775] ...
	I0213 23:03:58.778189   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775"
	I0213 23:03:58.817869   74928 logs.go:123] Gathering logs for kube-proxy [a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f] ...
	I0213 23:03:58.868520   74928 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f"
	I0213 23:03:58.902761   74928 out.go:304] Setting ErrFile to fd 2...
	I0213 23:03:58.902788   74928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:03:58.902840   74928 out.go:239] X Problems detected in kubelet:
	W0213 23:03:58.902856   74928 out.go:239]   Feb 13 23:02:29 addons-913502 kubelet[1554]: W0213 23:02:29.584745    1554 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	W0213 23:03:58.902868   74928 out.go:239]   Feb 13 23:02:29 addons-913502 kubelet[1554]: E0213 23:02:29.584796    1554 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-913502" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-913502' and this object
	I0213 23:03:58.902876   74928 out.go:304] Setting ErrFile to fd 2...
	I0213 23:03:58.902886   74928 out.go:338] TERM=,COLORTERM=, which probably does not support color
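Each "Gathering logs for <component>" cycle above is the same two-step crictl sequence: resolve container IDs by name with `crictl ps -a --quiet --name=<component>`, then tail each ID with `crictl logs --tail 400 <id>`. The sketch below runs that sequence locally rather than over the ssh_runner shown in the trace; gatherComponentLogs is a hypothetical helper, not minikube's logs.go.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs resolves container IDs for a component by name, then
// tails the last 400 lines of each, mirroring the two-step pattern in the
// trace. It shells out locally; minikube runs the same commands over SSH.
func gatherComponentLogs(name string) (string, error) {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return "", fmt.Errorf("listing %s containers: %w", name, err)
	}
	var out strings.Builder
	for _, id := range strings.Fields(string(ids)) {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("logs for %s: %w", id, err)
		}
		out.Write(logs)
	}
	return out.String(), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		logs, err := gatherComponentLogs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("=== %s (%d bytes) ===\n", c, len(logs))
	}
}
```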
	I0213 23:04:08.913088   74928 system_pods.go:59] 19 kube-system pods found
	I0213 23:04:08.913145   74928 system_pods.go:61] "coredns-5dd5756b68-kw9vb" [a45ecc94-1a7d-4b30-b9a1-4b013b7d85b6] Running
	I0213 23:04:08.913157   74928 system_pods.go:61] "csi-hostpath-attacher-0" [333cc586-eff2-4cf0-8661-dc7965e732c2] Running
	I0213 23:04:08.913164   74928 system_pods.go:61] "csi-hostpath-resizer-0" [3ff50dd0-ac61-42cd-bd4e-22c24a151d0b] Running
	I0213 23:04:08.913170   74928 system_pods.go:61] "csi-hostpathplugin-xhbgc" [3fa876b5-9f44-4de9-962e-81019a9b2450] Running
	I0213 23:04:08.913177   74928 system_pods.go:61] "etcd-addons-913502" [16dd7c79-5009-4cf9-903d-3578ea264a77] Running
	I0213 23:04:08.913182   74928 system_pods.go:61] "kindnet-x9mvr" [1d27f1a1-ead6-417c-a8ed-95aa215acc35] Running
	I0213 23:04:08.913199   74928 system_pods.go:61] "kube-apiserver-addons-913502" [035e932c-9b8c-4b51-8c74-d4d18bb66b65] Running
	I0213 23:04:08.913204   74928 system_pods.go:61] "kube-controller-manager-addons-913502" [a6e43d1f-af03-4c52-8f75-1758daee7a5a] Running
	I0213 23:04:08.913212   74928 system_pods.go:61] "kube-ingress-dns-minikube" [463f7dd2-ee46-408c-8790-4a7318e64279] Running
	I0213 23:04:08.913218   74928 system_pods.go:61] "kube-proxy-dd5xd" [89f747ec-71d3-403d-85f7-68278485ca5f] Running
	I0213 23:04:08.913222   74928 system_pods.go:61] "kube-scheduler-addons-913502" [69f21cdb-2661-48b6-905c-88847a110480] Running
	I0213 23:04:08.913229   74928 system_pods.go:61] "metrics-server-69cf46c98-jv886" [0721b3c3-1074-430f-8fcf-1a0a987218e0] Running
	I0213 23:04:08.913235   74928 system_pods.go:61] "nvidia-device-plugin-daemonset-nwpfp" [61a6a604-82d9-4f32-9be6-9b58ec3b2930] Running
	I0213 23:04:08.913242   74928 system_pods.go:61] "registry-proxy-6fcvz" [fd7e7f0f-51ba-46ce-8a59-f33819b6a633] Running
	I0213 23:04:08.913246   74928 system_pods.go:61] "registry-zd97h" [4c64ca96-e524-479f-b3b1-8e37e19bf37e] Running
	I0213 23:04:08.913250   74928 system_pods.go:61] "snapshot-controller-58dbcc7b99-8gqrl" [64679169-f689-4c5d-b89e-dbb3e7095f26] Running
	I0213 23:04:08.913256   74928 system_pods.go:61] "snapshot-controller-58dbcc7b99-k45hw" [7b5af714-4b23-43e2-9b68-0a03e729a563] Running
	I0213 23:04:08.913260   74928 system_pods.go:61] "storage-provisioner" [32af8c0c-00a1-4f92-afdb-fffc86fe3219] Running
	I0213 23:04:08.913265   74928 system_pods.go:61] "tiller-deploy-7b677967b9-hd46l" [fe48a6c2-5ee5-4f2e-afc7-bdf16742cbfe] Running
	I0213 23:04:08.913274   74928 system_pods.go:74] duration metric: took 10.927820338s to wait for pod list to return data ...
	I0213 23:04:08.913284   74928 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:04:08.915405   74928 default_sa.go:45] found service account: "default"
	I0213 23:04:08.915430   74928 default_sa.go:55] duration metric: took 2.136757ms for default service account to be created ...
	I0213 23:04:08.915441   74928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:04:08.923119   74928 system_pods.go:86] 19 kube-system pods found
	I0213 23:04:08.923153   74928 system_pods.go:89] "coredns-5dd5756b68-kw9vb" [a45ecc94-1a7d-4b30-b9a1-4b013b7d85b6] Running
	I0213 23:04:08.923162   74928 system_pods.go:89] "csi-hostpath-attacher-0" [333cc586-eff2-4cf0-8661-dc7965e732c2] Running
	I0213 23:04:08.923168   74928 system_pods.go:89] "csi-hostpath-resizer-0" [3ff50dd0-ac61-42cd-bd4e-22c24a151d0b] Running
	I0213 23:04:08.923174   74928 system_pods.go:89] "csi-hostpathplugin-xhbgc" [3fa876b5-9f44-4de9-962e-81019a9b2450] Running
	I0213 23:04:08.923179   74928 system_pods.go:89] "etcd-addons-913502" [16dd7c79-5009-4cf9-903d-3578ea264a77] Running
	I0213 23:04:08.923184   74928 system_pods.go:89] "kindnet-x9mvr" [1d27f1a1-ead6-417c-a8ed-95aa215acc35] Running
	I0213 23:04:08.923191   74928 system_pods.go:89] "kube-apiserver-addons-913502" [035e932c-9b8c-4b51-8c74-d4d18bb66b65] Running
	I0213 23:04:08.923203   74928 system_pods.go:89] "kube-controller-manager-addons-913502" [a6e43d1f-af03-4c52-8f75-1758daee7a5a] Running
	I0213 23:04:08.923211   74928 system_pods.go:89] "kube-ingress-dns-minikube" [463f7dd2-ee46-408c-8790-4a7318e64279] Running
	I0213 23:04:08.923217   74928 system_pods.go:89] "kube-proxy-dd5xd" [89f747ec-71d3-403d-85f7-68278485ca5f] Running
	I0213 23:04:08.923227   74928 system_pods.go:89] "kube-scheduler-addons-913502" [69f21cdb-2661-48b6-905c-88847a110480] Running
	I0213 23:04:08.923235   74928 system_pods.go:89] "metrics-server-69cf46c98-jv886" [0721b3c3-1074-430f-8fcf-1a0a987218e0] Running
	I0213 23:04:08.923243   74928 system_pods.go:89] "nvidia-device-plugin-daemonset-nwpfp" [61a6a604-82d9-4f32-9be6-9b58ec3b2930] Running
	I0213 23:04:08.923260   74928 system_pods.go:89] "registry-proxy-6fcvz" [fd7e7f0f-51ba-46ce-8a59-f33819b6a633] Running
	I0213 23:04:08.923266   74928 system_pods.go:89] "registry-zd97h" [4c64ca96-e524-479f-b3b1-8e37e19bf37e] Running
	I0213 23:04:08.923275   74928 system_pods.go:89] "snapshot-controller-58dbcc7b99-8gqrl" [64679169-f689-4c5d-b89e-dbb3e7095f26] Running
	I0213 23:04:08.923285   74928 system_pods.go:89] "snapshot-controller-58dbcc7b99-k45hw" [7b5af714-4b23-43e2-9b68-0a03e729a563] Running
	I0213 23:04:08.923293   74928 system_pods.go:89] "storage-provisioner" [32af8c0c-00a1-4f92-afdb-fffc86fe3219] Running
	I0213 23:04:08.923303   74928 system_pods.go:89] "tiller-deploy-7b677967b9-hd46l" [fe48a6c2-5ee5-4f2e-afc7-bdf16742cbfe] Running
	I0213 23:04:08.923314   74928 system_pods.go:126] duration metric: took 7.865029ms to wait for k8s-apps to be running ...
	I0213 23:04:08.923327   74928 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:04:08.923416   74928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:04:08.934795   74928 system_svc.go:56] duration metric: took 11.45976ms WaitForService to wait for kubelet.
	I0213 23:04:08.934822   74928 kubeadm.go:581] duration metric: took 1m45.370680443s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:04:08.934844   74928 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:04:08.937589   74928 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0213 23:04:08.937618   74928 node_conditions.go:123] node cpu capacity is 8
	I0213 23:04:08.937629   74928 node_conditions.go:105] duration metric: took 2.781014ms to run NodePressure ...
	I0213 23:04:08.937641   74928 start.go:228] waiting for startup goroutines ...
	I0213 23:04:08.937647   74928 start.go:233] waiting for cluster config update ...
	I0213 23:04:08.937662   74928 start.go:242] writing updated cluster config ...
	I0213 23:04:08.937959   74928 ssh_runner.go:195] Run: rm -f paused
	I0213 23:04:08.986987   74928 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:04:08.989418   74928 out.go:177] * Done! kubectl is now configured to use "addons-913502" cluster and "default" namespace by default
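The closing line reports a client/cluster minor-version skew of 1 (kubectl 1.29.1 against cluster 1.28.4), which is within kubectl's supported range, so it is surfaced as a note rather than an error. A tiny sketch of that comparison follows; minorOf is a hypothetical helper that assumes well-formed "major.minor.patch" input and does no validation.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf naively extracts the minor number from a "major.minor.patch"
// version string (assumption: well-formed input, no error handling).
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.29.1", "1.28.4"
	skew := minorOf(client) - minorOf(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints 1, as reported above
}
```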
	
	
	==> CRI-O <==
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.328263310Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=a59548cc-42a0-49ef-8bbe-7a8caed4df5c name=/runtime.v1.ImageService/PullImage
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.329098705Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=0cead669-f2dd-4d1f-85cd-38bcd0e0c5c8 name=/runtime.v1.ImageService/ImageStatus
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.330330922Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=0cead669-f2dd-4d1f-85cd-38bcd0e0c5c8 name=/runtime.v1.ImageService/ImageStatus
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.331186877Z" level=info msg="Creating container: default/hello-world-app-5d77478584-q55k8/hello-world-app" id=1cda602c-c67e-479a-8130-b081f087fbf4 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.331285061Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.381610857Z" level=info msg="Created container 0844f11527e57a4d41dd7e530c599f0d746590bda0cba4204d1f9b7730499562: default/hello-world-app-5d77478584-q55k8/hello-world-app" id=1cda602c-c67e-479a-8130-b081f087fbf4 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.382260728Z" level=info msg="Starting container: 0844f11527e57a4d41dd7e530c599f0d746590bda0cba4204d1f9b7730499562" id=772eb565-6db0-4a53-bc11-07157eed993c name=/runtime.v1.RuntimeService/StartContainer
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.389452019Z" level=info msg="Started container" PID=10913 containerID=0844f11527e57a4d41dd7e530c599f0d746590bda0cba4204d1f9b7730499562 description=default/hello-world-app-5d77478584-q55k8/hello-world-app id=772eb565-6db0-4a53-bc11-07157eed993c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dd7bb311ccf265e922e2314a30736bc7310d7af34b9382021487be5ad03a406
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.922200033Z" level=info msg="Removing container: f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066" id=5a3d375f-4452-47e8-b22b-671c9f35a804 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 13 23:06:49 addons-913502 crio[948]: time="2024-02-13 23:06:49.939015243Z" level=info msg="Removed container f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=5a3d375f-4452-47e8-b22b-671c9f35a804 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 13 23:06:51 addons-913502 crio[948]: time="2024-02-13 23:06:51.486648219Z" level=info msg="Stopping container: a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2 (timeout: 2s)" id=ece95574-7b2a-4af7-a0f7-22841695ee13 name=/runtime.v1.RuntimeService/StopContainer
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.492232795Z" level=warning msg="Stopping container a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=ece95574-7b2a-4af7-a0f7-22841695ee13 name=/runtime.v1.RuntimeService/StopContainer
	Feb 13 23:06:53 addons-913502 conmon[6196]: conmon a7dd59c495e2f1fac800 <ninfo>: container 6208 exited with status 137
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.624460635Z" level=info msg="Stopped container a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2: ingress-nginx/ingress-nginx-controller-69cff4fd79-blkwl/controller" id=ece95574-7b2a-4af7-a0f7-22841695ee13 name=/runtime.v1.RuntimeService/StopContainer
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.624980489Z" level=info msg="Stopping pod sandbox: 0c903e671316fc05e2d42189304ff05aec225d93d99f132077b88000aa72c70c" id=24a657c2-3dee-4ed1-b286-7e0965707daa name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.628052725Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-NXLUJ6JIGZYGBJMG - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-65SHGNOS7R6GYCCA - [0:0]\n-X KUBE-HP-NXLUJ6JIGZYGBJMG\n-X KUBE-HP-65SHGNOS7R6GYCCA\nCOMMIT\n"
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.629450509Z" level=info msg="Closing host port tcp:80"
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.629497752Z" level=info msg="Closing host port tcp:443"
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.630958263Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.630980234Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.631116121Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-blkwl Namespace:ingress-nginx ID:0c903e671316fc05e2d42189304ff05aec225d93d99f132077b88000aa72c70c UID:5185a108-4307-4b54-a0fe-4895307c6b78 NetNS:/var/run/netns/31584504-0f21-4747-8510-8ec5e6ec1cd1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.631229829Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-blkwl from CNI network \"kindnet\" (type=ptp)"
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.665832084Z" level=info msg="Stopped pod sandbox: 0c903e671316fc05e2d42189304ff05aec225d93d99f132077b88000aa72c70c" id=24a657c2-3dee-4ed1-b286-7e0965707daa name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.934981391Z" level=info msg="Removing container: a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2" id=013344e5-4a30-426d-8462-0f5ae53d919e name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 13 23:06:53 addons-913502 crio[948]: time="2024-02-13 23:06:53.950398715Z" level=info msg="Removed container a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2: ingress-nginx/ingress-nginx-controller-69cff4fd79-blkwl/controller" id=013344e5-4a30-426d-8462-0f5ae53d919e name=/runtime.v1.RuntimeService/RemoveContainer
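The stop sequence in this journal is a textbook graceful-stop escalation: CRI-O sends the stop signal, waits out the 2s grace period, logs the timeout, and conmon then reports exit status 137, which is 128 + SIGKILL (9). Below is a minimal sketch of the same escalation applied to an ordinary child process; stopWithGrace is a hypothetical helper, not CRI-O code.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace for the process to exit,
// then escalates to SIGKILL. A SIGKILLed child is reported by shells as
// exit status 137 (128 + 9), matching the conmon line in the journal above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		return <-done
	}
}

func main() {
	// The child shell traps SIGTERM, so the grace period must expire and the
	// SIGKILL path is exercised, just as with the slow-to-stop container above.
	cmd := exec.Command("sh", "-c", `trap "" TERM; sleep 60`)
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	err := stopWithGrace(cmd, 2*time.Second)
	fmt.Println("wait result:", err) // "signal: killed"
}
```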
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0844f11527e57       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      9 seconds ago       Running             hello-world-app           0                   2dd7bb311ccf2       hello-world-app-5d77478584-q55k8
	095cfc48c0d80       docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027                              2 minutes ago       Running             nginx                     0                   18f1ca5bfe868       nginx
	32909e02df9ec       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   75f2d2119d62e       headlamp-7ddfbb94ff-gfh4d
	a53887e924cf1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   14194a907aeeb       gcp-auth-d4c87556c-5lpdw
	f6b040ce18e80       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     2                   cccf7a8b2e9d9       ingress-nginx-admission-patch-52sbr
	c02134124c971       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   f4d0e632b4f12       ingress-nginx-admission-create-hrb59
	1042f95db46f4       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   c6d7191ac8f67       yakd-dashboard-9947fc6bf-rg866
	abbb6c7e78ab4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   d19bcbe2a3aea       local-path-provisioner-78b46b4d5c-dnjvv
	dfc72942996f7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   64387d8b4ea34       coredns-5dd5756b68-kw9vb
	33a1dd0fd8243       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   1e0837aee4791       storage-provisioner
	2c7ce000c4787       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   47f612c7d026e       kindnet-x9mvr
	a532fb0a6d4f4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   91a5992b2daca       kube-proxy-dd5xd
	01eb55b382020       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   3dfeeb690a198       etcd-addons-913502
	926999f79520b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   fa97521f32099       kube-controller-manager-addons-913502
	65dbb4b3290b8       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   d086b3c8e8f26       kube-scheduler-addons-913502
	8147d14fedd8c       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   963f2a2426be9       kube-apiserver-addons-913502
	
	
	==> coredns [dfc72942996f7a7b0ebf8ea4591e12da82229d30e2590234eb4f019274e9fa03] <==
	[INFO] 10.244.0.5:48935 - 1154 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000919s
	[INFO] 10.244.0.5:47036 - 59019 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003878437s
	[INFO] 10.244.0.5:47036 - 63638 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005672908s
	[INFO] 10.244.0.5:60150 - 17903 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005464711s
	[INFO] 10.244.0.5:60150 - 31722 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005760444s
	[INFO] 10.244.0.5:35471 - 18041 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004296577s
	[INFO] 10.244.0.5:35471 - 4986 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006785779s
	[INFO] 10.244.0.5:36615 - 8733 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068443s
	[INFO] 10.244.0.5:36615 - 6171 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110002s
	[INFO] 10.244.0.20:41713 - 24232 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000193934s
	[INFO] 10.244.0.20:55364 - 47880 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00023569s
	[INFO] 10.244.0.20:48787 - 7014 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103692s
	[INFO] 10.244.0.20:50377 - 18280 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000207898s
	[INFO] 10.244.0.20:37006 - 24365 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117348s
	[INFO] 10.244.0.20:34953 - 9016 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118315s
	[INFO] 10.244.0.20:36408 - 49095 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007237599s
	[INFO] 10.244.0.20:45186 - 5417 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007626588s
	[INFO] 10.244.0.20:48534 - 53281 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006939279s
	[INFO] 10.244.0.20:48536 - 61531 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007955487s
	[INFO] 10.244.0.20:56357 - 62279 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005494063s
	[INFO] 10.244.0.20:53840 - 12553 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005663663s
	[INFO] 10.244.0.20:53775 - 54501 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00094504s
	[INFO] 10.244.0.20:41867 - 39431 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000975881s
	[INFO] 10.244.0.24:33790 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000197173s
	[INFO] 10.244.0.24:52628 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000131795s
	
	
	==> describe nodes <==
	Name:               addons-913502
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-913502
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802
	                    minikube.k8s.io/name=addons-913502
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T23_02_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-913502
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:02:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-913502
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:06:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:05:12 +0000   Tue, 13 Feb 2024 23:02:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:05:12 +0000   Tue, 13 Feb 2024 23:02:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:05:12 +0000   Tue, 13 Feb 2024 23:02:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:05:12 +0000   Tue, 13 Feb 2024 23:02:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-913502
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99ec04bc261446e2976859579fc20c50
	  System UUID:                1219682b-0a1a-42f3-8732-01d5f51f0db6
	  Boot ID:                    997a1092-3efa-483b-88f8-21b3b3d49d89
	  Kernel Version:             5.15.0-1051-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-q55k8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-d4c87556c-5lpdw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  headlamp                    headlamp-7ddfbb94ff-gfh4d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-5dd5756b68-kw9vb                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m35s
	  kube-system                 etcd-addons-913502                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m49s
	  kube-system                 kindnet-x9mvr                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m36s
	  kube-system                 kube-apiserver-addons-913502               250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-addons-913502      200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-dd5xd                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-913502               100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  local-path-storage          local-path-provisioner-78b46b4d5c-dnjvv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-rg866             0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m30s                  kube-proxy       
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node addons-913502 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node addons-913502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x8 over 4m55s)  kubelet          Node addons-913502 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m49s                  kubelet          Node addons-913502 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s                  kubelet          Node addons-913502 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s                  kubelet          Node addons-913502 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m37s                  node-controller  Node addons-913502 event: Registered Node addons-913502 in Controller
	  Normal  NodeReady                4m1s                   kubelet          Node addons-913502 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.007821] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003777] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000809] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000674] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000695] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000834] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000854] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000008] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001547] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.823537] kauditd_printk_skb: 36 callbacks suppressed
	[Feb13 23:04] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ba 53 96 50 8d 56 a2 88 5c c9 69 2e 08 00
	[  +1.002732] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ba 53 96 50 8d 56 a2 88 5c c9 69 2e 08 00
	[  +2.019801] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ba 53 96 50 8d 56 a2 88 5c c9 69 2e 08 00
	[  +4.027622] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ba 53 96 50 8d 56 a2 88 5c c9 69 2e 08 00
	[  +8.191203] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba 53 96 50 8d 56 a2 88 5c c9 69 2e 08 00
	[Feb13 23:05] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba 53 96 50 8d 56 a2 88 5c c9 69 2e 08 00
	[ +33.276812] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ba 53 96 50 8d 56 a2 88 5c c9 69 2e 08 00
	
	
	==> etcd [01eb55b3820206495d80eef7ea79010ec8edebedb21356a351659d7ffe72b775] <==
	{"level":"info","ts":"2024-02-13T23:02:04.485939Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-13T23:02:04.485999Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-13T23:02:04.486067Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-13T23:02:04.486102Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-13T23:02:05.068363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-13T23:02:05.068439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T23:02:05.068457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-02-13T23:02:05.068482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:02:05.06849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-13T23:02:05.068502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-13T23:02:05.068512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-13T23:02:05.069532Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:02:05.070208Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:02:05.070211Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-913502 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:02:05.070265Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:02:05.070439Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:02:05.070462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:02:05.070606Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:02:05.070691Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:02:05.07075Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:02:05.071444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:02:05.071582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-13T23:02:25.874217Z","caller":"traceutil/trace.go:171","msg":"trace[772959111] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"194.823819ms","start":"2024-02-13T23:02:25.679362Z","end":"2024-02-13T23:02:25.874186Z","steps":["trace[772959111] 'process raft request'  (duration: 191.406014ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T23:02:25.874886Z","caller":"traceutil/trace.go:171","msg":"trace[1698636606] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"195.106121ms","start":"2024-02-13T23:02:25.679765Z","end":"2024-02-13T23:02:25.874871Z","steps":["trace[1698636606] 'process raft request'  (duration: 194.332479ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T23:02:26.766289Z","caller":"traceutil/trace.go:171","msg":"trace[326315272] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"100.382619ms","start":"2024-02-13T23:02:26.665875Z","end":"2024-02-13T23:02:26.766257Z","steps":["trace[326315272] 'process raft request'  (duration: 14.829789ms)","trace[326315272] 'compare'  (duration: 85.456877ms)"],"step_count":2}
	
	
	==> gcp-auth [a53887e924cf1cf348955dda27f4e348f8991ff3c1507b2306bd849cd4c42bc2] <==
	2024/02/13 23:03:38 GCP Auth Webhook started!
	2024/02/13 23:04:10 Ready to marshal response ...
	2024/02/13 23:04:10 Ready to write response ...
	2024/02/13 23:04:10 Ready to marshal response ...
	2024/02/13 23:04:10 Ready to write response ...
	2024/02/13 23:04:10 Ready to marshal response ...
	2024/02/13 23:04:10 Ready to write response ...
	2024/02/13 23:04:14 Ready to marshal response ...
	2024/02/13 23:04:14 Ready to write response ...
	2024/02/13 23:04:20 Ready to marshal response ...
	2024/02/13 23:04:20 Ready to write response ...
	2024/02/13 23:04:21 Ready to marshal response ...
	2024/02/13 23:04:21 Ready to write response ...
	2024/02/13 23:04:25 Ready to marshal response ...
	2024/02/13 23:04:25 Ready to write response ...
	2024/02/13 23:04:33 Ready to marshal response ...
	2024/02/13 23:04:33 Ready to write response ...
	2024/02/13 23:04:33 Ready to marshal response ...
	2024/02/13 23:04:33 Ready to write response ...
	2024/02/13 23:04:42 Ready to marshal response ...
	2024/02/13 23:04:42 Ready to write response ...
	2024/02/13 23:04:49 Ready to marshal response ...
	2024/02/13 23:04:49 Ready to write response ...
	2024/02/13 23:06:48 Ready to marshal response ...
	2024/02/13 23:06:48 Ready to write response ...
	
	
	==> kernel <==
	 23:06:58 up  1:49,  0 users,  load average: 0.51, 1.10, 1.48
	Linux addons-913502 5.15.0-1051-gcp #59~20.04.1-Ubuntu SMP Thu Jan 25 02:51:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [2c7ce000c4787cd59b99f0aa3712df814630b5df86dc8d4106851ef9fb5e528f] <==
	I0213 23:04:56.917739       1 main.go:227] handling current node
	I0213 23:05:06.921200       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:05:06.921225       1 main.go:227] handling current node
	I0213 23:05:16.925410       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:05:16.925441       1 main.go:227] handling current node
	I0213 23:05:26.935717       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:05:26.935748       1 main.go:227] handling current node
	I0213 23:05:36.939996       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:05:36.940020       1 main.go:227] handling current node
	I0213 23:05:46.951609       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:05:46.951633       1 main.go:227] handling current node
	I0213 23:05:56.955861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:05:56.955887       1 main.go:227] handling current node
	I0213 23:06:06.967699       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:06:06.967723       1 main.go:227] handling current node
	I0213 23:06:16.972263       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:06:16.972287       1 main.go:227] handling current node
	I0213 23:06:26.983963       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:06:26.983997       1 main.go:227] handling current node
	I0213 23:06:36.995583       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:06:36.995607       1 main.go:227] handling current node
	I0213 23:06:46.999812       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:06:46.999836       1 main.go:227] handling current node
	I0213 23:06:57.011722       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:06:57.011753       1 main.go:227] handling current node
	
	
	==> kube-apiserver [8147d14fedd8c6540430e8abd97dd62cc925f2af68896a21c6414005e96c1b27] <==
	I0213 23:04:28.010079       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0213 23:04:29.019572       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0213 23:04:35.482440       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0213 23:05:03.925911       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 23:05:03.925976       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 23:05:03.936257       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 23:05:03.936338       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 23:05:03.940593       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 23:05:03.940642       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 23:05:03.942699       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 23:05:03.942806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 23:05:03.950006       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 23:05:03.950055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 23:05:03.955819       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 23:05:03.955847       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 23:05:03.969504       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 23:05:03.969631       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 23:05:03.969848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 23:05:03.969950       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0213 23:05:04.943121       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0213 23:05:04.970261       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0213 23:05:04.980607       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0213 23:05:05.912376       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0213 23:06:48.319048       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.251.180"}
	E0213 23:06:50.510426       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [926999f79520bbbaa76881e1d82a66d6c9dbff9b40626be7aec26bba143ebef4] <==
	E0213 23:05:39.757091       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 23:05:46.124281       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 23:05:46.124313       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 23:06:00.898382       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 23:06:00.898432       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 23:06:11.724140       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 23:06:11.724175       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 23:06:12.479072       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 23:06:12.479104       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 23:06:19.234743       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 23:06:19.234776       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 23:06:32.217693       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 23:06:32.217728       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0213 23:06:48.161400       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0213 23:06:48.172558       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-q55k8"
	I0213 23:06:48.178223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.994356ms"
	I0213 23:06:48.182364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.086572ms"
	I0213 23:06:48.182465       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.944µs"
	I0213 23:06:48.182562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.63µs"
	I0213 23:06:48.188523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.007µs"
	I0213 23:06:49.940268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.59241ms"
	I0213 23:06:49.940376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.828µs"
	I0213 23:06:50.474371       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0213 23:06:50.475957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="9.596µs"
	I0213 23:06:50.478765       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [a532fb0a6d4f40f6d4e96acc87a0c7a609cd1530b100bc6bdb2954f614d7ce5f] <==
	I0213 23:02:26.873144       1 server_others.go:69] "Using iptables proxy"
	I0213 23:02:27.263323       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0213 23:02:27.971745       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0213 23:02:27.986949       1 server_others.go:152] "Using iptables Proxier"
	I0213 23:02:27.987068       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0213 23:02:27.987084       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0213 23:02:27.987139       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 23:02:27.987684       1 server.go:846] "Version info" version="v1.28.4"
	I0213 23:02:27.987722       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:02:28.060854       1 config.go:188] "Starting service config controller"
	I0213 23:02:28.061359       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 23:02:28.060885       1 config.go:97] "Starting endpoint slice config controller"
	I0213 23:02:28.061810       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 23:02:28.061051       1 config.go:315] "Starting node config controller"
	I0213 23:02:28.063460       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 23:02:28.163116       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 23:02:28.163195       1 shared_informer.go:318] Caches are synced for service config
	I0213 23:02:28.163588       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [65dbb4b3290b81d79b9ed0f3ebb32d5c3931cea62c5ca1a83a3e5ced0af68aa6] <==
	W0213 23:02:06.589630       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 23:02:06.589802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0213 23:02:06.589809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 23:02:06.589824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0213 23:02:06.589769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:02:06.589849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:02:06.590431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 23:02:06.590458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 23:02:06.590437       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 23:02:06.590477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 23:02:06.590809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:02:06.590834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 23:02:06.591277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 23:02:06.591297       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 23:02:06.591314       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 23:02:06.591311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 23:02:07.502607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 23:02:07.502647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0213 23:02:07.502615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 23:02:07.502670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 23:02:07.543261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0213 23:02:07.543288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0213 23:02:07.669380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 23:02:07.669416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0213 23:02:07.879015       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 13 23:06:48 addons-913502 kubelet[1554]: I0213 23:06:48.288573    1554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc7lb\" (UniqueName: \"kubernetes.io/projected/b284e710-34e5-467f-9e34-bc1965295747-kube-api-access-lc7lb\") pod \"hello-world-app-5d77478584-q55k8\" (UID: \"b284e710-34e5-467f-9e34-bc1965295747\") " pod="default/hello-world-app-5d77478584-q55k8"
	Feb 13 23:06:48 addons-913502 kubelet[1554]: I0213 23:06:48.288667    1554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b284e710-34e5-467f-9e34-bc1965295747-gcp-creds\") pod \"hello-world-app-5d77478584-q55k8\" (UID: \"b284e710-34e5-467f-9e34-bc1965295747\") " pod="default/hello-world-app-5d77478584-q55k8"
	Feb 13 23:06:48 addons-913502 kubelet[1554]: W0213 23:06:48.573658    1554 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3a3c4bea7929182449776c05fa455c4211c81c7e833202acb79be3ab764f9ccb/crio-2dd7bb311ccf265e922e2314a30736bc7310d7af34b9382021487be5ad03a406 WatchSource:0}: Error finding container 2dd7bb311ccf265e922e2314a30736bc7310d7af34b9382021487be5ad03a406: Status 404 returned error can't find the container with id 2dd7bb311ccf265e922e2314a30736bc7310d7af34b9382021487be5ad03a406
	Feb 13 23:06:49 addons-913502 kubelet[1554]: I0213 23:06:49.497546    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnjsb\" (UniqueName: \"kubernetes.io/projected/463f7dd2-ee46-408c-8790-4a7318e64279-kube-api-access-tnjsb\") pod \"463f7dd2-ee46-408c-8790-4a7318e64279\" (UID: \"463f7dd2-ee46-408c-8790-4a7318e64279\") "
	Feb 13 23:06:49 addons-913502 kubelet[1554]: I0213 23:06:49.499441    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463f7dd2-ee46-408c-8790-4a7318e64279-kube-api-access-tnjsb" (OuterVolumeSpecName: "kube-api-access-tnjsb") pod "463f7dd2-ee46-408c-8790-4a7318e64279" (UID: "463f7dd2-ee46-408c-8790-4a7318e64279"). InnerVolumeSpecName "kube-api-access-tnjsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 23:06:49 addons-913502 kubelet[1554]: I0213 23:06:49.598651    1554 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tnjsb\" (UniqueName: \"kubernetes.io/projected/463f7dd2-ee46-408c-8790-4a7318e64279-kube-api-access-tnjsb\") on node \"addons-913502\" DevicePath \"\""
	Feb 13 23:06:49 addons-913502 kubelet[1554]: I0213 23:06:49.921226    1554 scope.go:117] "RemoveContainer" containerID="f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066"
	Feb 13 23:06:49 addons-913502 kubelet[1554]: I0213 23:06:49.939302    1554 scope.go:117] "RemoveContainer" containerID="f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066"
	Feb 13 23:06:49 addons-913502 kubelet[1554]: E0213 23:06:49.939752    1554 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066\": container with ID starting with f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066 not found: ID does not exist" containerID="f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066"
	Feb 13 23:06:49 addons-913502 kubelet[1554]: I0213 23:06:49.939820    1554 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066"} err="failed to get container status \"f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066\": rpc error: code = NotFound desc = could not find container \"f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066\": container with ID starting with f462d4866b127b659d396a233d73064ccb1267ca980a3693e0617ab7cf2ed066 not found: ID does not exist"
	Feb 13 23:06:49 addons-913502 kubelet[1554]: I0213 23:06:49.944244    1554 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-q55k8" podStartSLOduration=1.192168864 podCreationTimestamp="2024-02-13 23:06:48 +0000 UTC" firstStartedPulling="2024-02-13 23:06:48.576542954 +0000 UTC m=+279.161483063" lastFinishedPulling="2024-02-13 23:06:49.328550653 +0000 UTC m=+279.913490771" observedRunningTime="2024-02-13 23:06:49.93340965 +0000 UTC m=+280.518349792" watchObservedRunningTime="2024-02-13 23:06:49.944176572 +0000 UTC m=+280.529116726"
	Feb 13 23:06:51 addons-913502 kubelet[1554]: I0213 23:06:51.501668    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="463f7dd2-ee46-408c-8790-4a7318e64279" path="/var/lib/kubelet/pods/463f7dd2-ee46-408c-8790-4a7318e64279/volumes"
	Feb 13 23:06:51 addons-913502 kubelet[1554]: I0213 23:06:51.502028    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="475e3344-264d-46da-9f01-658ac088fddb" path="/var/lib/kubelet/pods/475e3344-264d-46da-9f01-658ac088fddb/volumes"
	Feb 13 23:06:51 addons-913502 kubelet[1554]: I0213 23:06:51.502342    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a6dde495-907f-4233-b3e9-f95c6bf230cc" path="/var/lib/kubelet/pods/a6dde495-907f-4233-b3e9-f95c6bf230cc/volumes"
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.825521    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5185a108-4307-4b54-a0fe-4895307c6b78-webhook-cert\") pod \"5185a108-4307-4b54-a0fe-4895307c6b78\" (UID: \"5185a108-4307-4b54-a0fe-4895307c6b78\") "
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.825584    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmnl5\" (UniqueName: \"kubernetes.io/projected/5185a108-4307-4b54-a0fe-4895307c6b78-kube-api-access-mmnl5\") pod \"5185a108-4307-4b54-a0fe-4895307c6b78\" (UID: \"5185a108-4307-4b54-a0fe-4895307c6b78\") "
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.827554    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5185a108-4307-4b54-a0fe-4895307c6b78-kube-api-access-mmnl5" (OuterVolumeSpecName: "kube-api-access-mmnl5") pod "5185a108-4307-4b54-a0fe-4895307c6b78" (UID: "5185a108-4307-4b54-a0fe-4895307c6b78"). InnerVolumeSpecName "kube-api-access-mmnl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.827712    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5185a108-4307-4b54-a0fe-4895307c6b78-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5185a108-4307-4b54-a0fe-4895307c6b78" (UID: "5185a108-4307-4b54-a0fe-4895307c6b78"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.926058    1554 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mmnl5\" (UniqueName: \"kubernetes.io/projected/5185a108-4307-4b54-a0fe-4895307c6b78-kube-api-access-mmnl5\") on node \"addons-913502\" DevicePath \"\""
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.926099    1554 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5185a108-4307-4b54-a0fe-4895307c6b78-webhook-cert\") on node \"addons-913502\" DevicePath \"\""
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.933831    1554 scope.go:117] "RemoveContainer" containerID="a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2"
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.950661    1554 scope.go:117] "RemoveContainer" containerID="a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2"
	Feb 13 23:06:53 addons-913502 kubelet[1554]: E0213 23:06:53.951059    1554 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2\": container with ID starting with a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2 not found: ID does not exist" containerID="a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2"
	Feb 13 23:06:53 addons-913502 kubelet[1554]: I0213 23:06:53.951111    1554 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2"} err="failed to get container status \"a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2\": rpc error: code = NotFound desc = could not find container \"a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2\": container with ID starting with a7dd59c495e2f1fac800152c02cb43564fa7e66c5b3f65d5ef1951cee9176dc2 not found: ID does not exist"
	Feb 13 23:06:55 addons-913502 kubelet[1554]: I0213 23:06:55.501317    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5185a108-4307-4b54-a0fe-4895307c6b78" path="/var/lib/kubelet/pods/5185a108-4307-4b54-a0fe-4895307c6b78/volumes"
	
	
	==> storage-provisioner [33a1dd0fd8243359d3bd9ca651a1228961de2f37c1f4b4568bb28a94b0c6b511] <==
	I0213 23:02:58.173397       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:02:58.184949       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:02:58.185014       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:02:58.192648       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:02:58.192875       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-913502_b503f7ed-30f9-41fc-a132-85a9d16dcbdd!
	I0213 23:02:58.193029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b73fd4ce-47e4-421a-9ef0-bac8cdb58009", APIVersion:"v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-913502_b503f7ed-30f9-41fc-a132-85a9d16dcbdd became leader
	I0213 23:02:58.293351       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-913502_b503f7ed-30f9-41fc-a132-85a9d16dcbdd!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-913502 -n addons-913502
helpers_test.go:261: (dbg) Run:  kubectl --context addons-913502 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.73s)
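Failure analysis: the nginx pod went Ready within ~12s and the ingress controller was running, so the failing step was the in-VM curl against http://127.0.0.1/, which hit the 2m10s ceiling; "ssh: Process exited with status 28" is almost certainly curl's exit code 28 (CURLE_OPERATION_TIMEDOUT), i.e. the request never completed rather than being refused. A minimal manual probe, assuming the addons-913502 profile were still up (profile name, URL, and Host header are taken from the log above; --max-time 30 is an illustrative timeout, not the test's):

    out/minikube-linux-amd64 -p addons-913502 ssh \
      "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-913502 -n ingress-nginx get pods,svc -o wide

Running curl with -v instead of -s distinguishes a stalled TCP connect (hostPort/iptables problem on the node) from a connection that opens but never receives an HTTP response (controller/backend problem).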

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-879196
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image load --daemon gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 image load --daemon gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr: (6.809259033s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 image ls: (2.395856593s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-879196" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.07s)
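Failure analysis: "image load --daemon" returned success after ~6.8s, but the subsequent "image ls" (itself unusually slow at ~2.4s) did not show gcr.io/google-containers/addon-resizer:functional-879196, so the tag apparently never landed in the node's CRI-O store. A quick manual cross-check, assuming the functional-879196 profile were still up (all names are taken from the log above):

    out/minikube-linux-amd64 -p functional-879196 ssh \
      "sudo crictl images | grep addon-resizer"
    docker image inspect gcr.io/google-containers/addon-resizer:functional-879196 \
      --format '{{.Id}}'

Comparing crictl's view inside the node with the host Docker daemon's view shows whether the load silently dropped the image in transfer or whether only minikube's image listing is stale.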

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (183.78s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-660356 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-660356 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.12742392s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-660356 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-660356 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [41132b8c-180b-4ac5-8e6a-83e7e39d7106] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [41132b8c-180b-4ac5-8e6a-83e7e39d7106] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003588478s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-660356 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0213 23:14:09.008143   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:14:36.693522   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-660356 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.324976264s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
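Exit status 28 in the stderr above is curl's operation-timed-out code, so the request to 127.0.0.1:80 inside the node hung rather than being refused. A minimal triage sketch, assuming the ingress-addon-legacy-660356 cluster were still up (these commands are not part of the test):

# confirm the v1beta1 Ingress was admitted and the controller is serving
kubectl --context ingress-addon-legacy-660356 get ingress -A -o wide
kubectl --context ingress-addon-legacy-660356 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100

# re-run the probe verbosely with an explicit timeout instead of waiting out ssh
out/minikube-linux-amd64 -p ingress-addon-legacy-660356 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"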
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-660356 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-660356 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0213 23:15:39.224724   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:39.229985   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:39.240240   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:39.260661   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:39.300934   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:39.381310   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:39.541707   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:39.862280   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:40.503211   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.015864076s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-660356 addons disable ingress-dns --alsologtostderr -v=1
E0213 23:15:41.783748   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-660356 addons disable ingress-dns --alsologtostderr -v=1: (2.164475464s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-660356 addons disable ingress --alsologtostderr -v=1
E0213 23:15:44.344541   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:15:49.465406   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-660356 addons disable ingress --alsologtostderr -v=1: (7.430759255s)
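The nslookup failure above timed out ("no servers could be reached") rather than returning NXDOMAIN, which points at nothing answering UDP/53 on the node IP handed back by `minikube ip`. A rough sketch of how one might localize that while the cluster is still running; the grep pattern for the addon pod is an assumption, not something this log confirms:

# is the ingress-dns responder pod up and scheduled on the node?
kubectl --context ingress-addon-legacy-660356 -n kube-system get pods -o wide | grep -i ingress-dns

# query port 53 on the node IP directly, with explicit timeouts
dig +time=5 +tries=1 @192.168.49.2 hello-john.test
nslookup -timeout=5 hello-john.test 192.168.49.2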
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-660356
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-660356:

-- stdout --
	[
	    {
	        "Id": "613fc374591f8dceed0dd6bbbbe1ea1ac5682275d59aaa64bb28a6f53a35e0e9",
	        "Created": "2024-02-13T23:11:39.103238072Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 114247,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-13T23:11:39.362282076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9941de2e064a4a6a7155bfc66cedd2854b8c725b77bb8d4eaf81bef39f951dd7",
	        "ResolvConfPath": "/var/lib/docker/containers/613fc374591f8dceed0dd6bbbbe1ea1ac5682275d59aaa64bb28a6f53a35e0e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/613fc374591f8dceed0dd6bbbbe1ea1ac5682275d59aaa64bb28a6f53a35e0e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/613fc374591f8dceed0dd6bbbbe1ea1ac5682275d59aaa64bb28a6f53a35e0e9/hosts",
	        "LogPath": "/var/lib/docker/containers/613fc374591f8dceed0dd6bbbbe1ea1ac5682275d59aaa64bb28a6f53a35e0e9/613fc374591f8dceed0dd6bbbbe1ea1ac5682275d59aaa64bb28a6f53a35e0e9-json.log",
	        "Name": "/ingress-addon-legacy-660356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-660356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-660356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cb2656ca084fc40f56eb29e2f94ec2376c9e60bbb0488233a1860c01d0ddee05-init/diff:/var/lib/docker/overlay2/4fe14e78c622f13dfc4094e03ac245950865fc60884691f5477756f62ef198c3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb2656ca084fc40f56eb29e2f94ec2376c9e60bbb0488233a1860c01d0ddee05/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb2656ca084fc40f56eb29e2f94ec2376c9e60bbb0488233a1860c01d0ddee05/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb2656ca084fc40f56eb29e2f94ec2376c9e60bbb0488233a1860c01d0ddee05/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-660356",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-660356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-660356",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-660356",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-660356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d21ec277479164b5079d8dc4badab43ce8c1a86695657a59ad3d1dedbd6ae83e",
	            "SandboxKey": "/var/run/docker/netns/d21ec2774791",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-660356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "613fc374591f",
	                        "ingress-addon-legacy-660356"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "6d3302400eecb8d59f179681543dac9f17e247e9df5a8c3d5d7a63e83d8fa332",
	                    "EndpointID": "cf77d7e09442302114b334f1cf358e8d85376c45c62c919dfe68eaeb5f501171",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-660356",
	                        "613fc374591f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
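For reference, the host-port bindings in the NetworkSettings.Ports map above are what minikube itself reads to reach the node; the same Go template appears in the "Last Start" log below. A one-line sketch that extracts the SSH mapping (per the dump above it should print 32787):

docker inspect ingress-addon-legacy-660356 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'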
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-660356 -n ingress-addon-legacy-660356
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-660356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-660356 logs -n 25: (1.069022098s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-879196 image rm                                                   | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-879196                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-879196 image ls                                                   | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	| image          | functional-879196 image load                                                 | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-879196 image ls                                                   | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	| image          | functional-879196 image save --daemon                                        | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-879196                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                           | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | -p functional-879196                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| service        | functional-879196 service                                                    | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | hello-node-connect --url                                                     |                             |         |         |                     |                     |
	| update-context | functional-879196                                                            | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-879196                                                            | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-879196                                                            | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-879196                                                            | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-879196                                                            | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-879196 ssh pgrep                                                  | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-879196 image build -t                                             | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | localhost/my-image:functional-879196                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-879196                                                            | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-879196                                                            | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-879196 image ls                                                   | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	| delete         | -p functional-879196                                                         | functional-879196           | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:11 UTC |
	| start          | -p ingress-addon-legacy-660356                                               | ingress-addon-legacy-660356 | jenkins | v1.32.0 | 13 Feb 24 23:11 UTC | 13 Feb 24 23:12 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-660356                                                  | ingress-addon-legacy-660356 | jenkins | v1.32.0 | 13 Feb 24 23:12 UTC | 13 Feb 24 23:12 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-660356                                                  | ingress-addon-legacy-660356 | jenkins | v1.32.0 | 13 Feb 24 23:12 UTC | 13 Feb 24 23:12 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-660356                                                  | ingress-addon-legacy-660356 | jenkins | v1.32.0 | 13 Feb 24 23:13 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-660356 ip                                               | ingress-addon-legacy-660356 | jenkins | v1.32.0 | 13 Feb 24 23:15 UTC | 13 Feb 24 23:15 UTC |
	| addons         | ingress-addon-legacy-660356                                                  | ingress-addon-legacy-660356 | jenkins | v1.32.0 | 13 Feb 24 23:15 UTC | 13 Feb 24 23:15 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-660356                                                  | ingress-addon-legacy-660356 | jenkins | v1.32.0 | 13 Feb 24 23:15 UTC | 13 Feb 24 23:15 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:11:27
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:11:27.488819  113618 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:11:27.488968  113618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:11:27.488981  113618 out.go:304] Setting ErrFile to fd 2...
	I0213 23:11:27.488989  113618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:11:27.489226  113618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:11:27.489884  113618 out.go:298] Setting JSON to false
	I0213 23:11:27.490922  113618 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6835,"bootTime":1707859053,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:11:27.490991  113618 start.go:138] virtualization: kvm guest
	I0213 23:11:27.493490  113618 out.go:177] * [ingress-addon-legacy-660356] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:11:27.495324  113618 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 23:11:27.497046  113618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:11:27.495346  113618 notify.go:220] Checking for updates...
	I0213 23:11:27.498859  113618 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:11:27.500554  113618 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	I0213 23:11:27.502389  113618 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:11:27.503853  113618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:11:27.505706  113618 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:11:27.528916  113618 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0213 23:11:27.529041  113618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:11:27.583861  113618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2024-02-13 23:11:27.574807113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:11:27.584069  113618 docker.go:295] overlay module found
	I0213 23:11:27.586509  113618 out.go:177] * Using the docker driver based on user configuration
	I0213 23:11:27.588359  113618 start.go:298] selected driver: docker
	I0213 23:11:27.588382  113618 start.go:902] validating driver "docker" against <nil>
	I0213 23:11:27.588398  113618 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:11:27.589189  113618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:11:27.639308  113618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2024-02-13 23:11:27.630772869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:11:27.639508  113618 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 23:11:27.639722  113618 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 23:11:27.642115  113618 out.go:177] * Using Docker driver with root privileges
	I0213 23:11:27.644148  113618 cni.go:84] Creating CNI manager for ""
	I0213 23:11:27.644181  113618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:11:27.644193  113618 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0213 23:11:27.644206  113618 start_flags.go:321] config:
	{Name:ingress-addon-legacy-660356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-660356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:11:27.646014  113618 out.go:177] * Starting control plane node ingress-addon-legacy-660356 in cluster ingress-addon-legacy-660356
	I0213 23:11:27.647600  113618 cache.go:121] Beginning downloading kic base image for docker with crio
	I0213 23:11:27.649154  113618 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0213 23:11:27.650700  113618 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0213 23:11:27.650789  113618 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 23:11:27.666894  113618 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0213 23:11:27.666921  113618 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0213 23:11:27.683408  113618 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0213 23:11:27.683442  113618 cache.go:56] Caching tarball of preloaded images
	I0213 23:11:27.683579  113618 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0213 23:11:27.685492  113618 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0213 23:11:27.687079  113618 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:11:27.724799  113618 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0213 23:11:30.934653  113618 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:11:30.934762  113618 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:11:31.944709  113618 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0213 23:11:31.945059  113618 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/config.json ...
	I0213 23:11:31.945092  113618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/config.json: {Name:mk5c2c46f0b8622e6eaae9649dd477ea2720fffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:11:31.945293  113618 cache.go:194] Successfully downloaded all kic artifacts
	I0213 23:11:31.945318  113618 start.go:365] acquiring machines lock for ingress-addon-legacy-660356: {Name:mk1c4c66e8fb3c867bdbe51761a68350db3a0edd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:11:31.945364  113618 start.go:369] acquired machines lock for "ingress-addon-legacy-660356" in 34.943µs
	I0213 23:11:31.945381  113618 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-660356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-660356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:11:31.945451  113618 start.go:125] createHost starting for "" (driver="docker")
	I0213 23:11:31.948299  113618 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0213 23:11:31.948562  113618 start.go:159] libmachine.API.Create for "ingress-addon-legacy-660356" (driver="docker")
	I0213 23:11:31.948601  113618 client.go:168] LocalClient.Create starting
	I0213 23:11:31.948720  113618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem
	I0213 23:11:31.948760  113618 main.go:141] libmachine: Decoding PEM data...
	I0213 23:11:31.948780  113618 main.go:141] libmachine: Parsing certificate...
	I0213 23:11:31.948839  113618 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem
	I0213 23:11:31.948861  113618 main.go:141] libmachine: Decoding PEM data...
	I0213 23:11:31.948869  113618 main.go:141] libmachine: Parsing certificate...
	I0213 23:11:31.949167  113618 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-660356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0213 23:11:31.964784  113618 cli_runner.go:211] docker network inspect ingress-addon-legacy-660356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0213 23:11:31.964878  113618 network_create.go:281] running [docker network inspect ingress-addon-legacy-660356] to gather additional debugging logs...
	I0213 23:11:31.964903  113618 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-660356
	W0213 23:11:31.979846  113618 cli_runner.go:211] docker network inspect ingress-addon-legacy-660356 returned with exit code 1
	I0213 23:11:31.979885  113618 network_create.go:284] error running [docker network inspect ingress-addon-legacy-660356]: docker network inspect ingress-addon-legacy-660356: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-660356 not found
	I0213 23:11:31.979899  113618 network_create.go:286] output of [docker network inspect ingress-addon-legacy-660356]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-660356 not found
	
	** /stderr **
	I0213 23:11:31.980007  113618 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 23:11:31.994689  113618 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00088d920}
	I0213 23:11:31.994724  113618 network_create.go:124] attempt to create docker network ingress-addon-legacy-660356 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0213 23:11:31.994776  113618 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-660356 ingress-addon-legacy-660356
	I0213 23:11:32.044486  113618 network_create.go:108] docker network ingress-addon-legacy-660356 192.168.49.0/24 created
	I0213 23:11:32.044531  113618 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-660356" container
	I0213 23:11:32.044607  113618 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0213 23:11:32.059518  113618 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-660356 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-660356 --label created_by.minikube.sigs.k8s.io=true
	I0213 23:11:32.076495  113618 oci.go:103] Successfully created a docker volume ingress-addon-legacy-660356
	I0213 23:11:32.076591  113618 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-660356-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-660356 --entrypoint /usr/bin/test -v ingress-addon-legacy-660356:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0213 23:11:33.757839  113618 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-660356-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-660356 --entrypoint /usr/bin/test -v ingress-addon-legacy-660356:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.681190734s)
	I0213 23:11:33.757877  113618 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-660356
	I0213 23:11:33.757901  113618 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0213 23:11:33.757928  113618 kic.go:194] Starting extracting preloaded images to volume ...
	I0213 23:11:33.757993  113618 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-660356:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0213 23:11:39.037647  113618 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-660356:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.279601788s)
	I0213 23:11:39.037685  113618 kic.go:203] duration metric: took 5.279755 seconds to extract preloaded images to volume
	W0213 23:11:39.037812  113618 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0213 23:11:39.037898  113618 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0213 23:11:39.089076  113618 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-660356 --name ingress-addon-legacy-660356 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-660356 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-660356 --network ingress-addon-legacy-660356 --ip 192.168.49.2 --volume ingress-addon-legacy-660356:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0213 23:11:39.370195  113618 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-660356 --format={{.State.Running}}
	I0213 23:11:39.386824  113618 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-660356 --format={{.State.Status}}
	I0213 23:11:39.405339  113618 cli_runner.go:164] Run: docker exec ingress-addon-legacy-660356 stat /var/lib/dpkg/alternatives/iptables
	I0213 23:11:39.446303  113618 oci.go:144] the created container "ingress-addon-legacy-660356" has a running status.
	I0213 23:11:39.446343  113618 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa...
	I0213 23:11:39.534032  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0213 23:11:39.534084  113618 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0213 23:11:39.553106  113618 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-660356 --format={{.State.Status}}
	I0213 23:11:39.568361  113618 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0213 23:11:39.568384  113618 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-660356 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0213 23:11:39.613483  113618 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-660356 --format={{.State.Status}}
	I0213 23:11:39.629710  113618 machine.go:88] provisioning docker machine ...
	I0213 23:11:39.629758  113618 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-660356"
	I0213 23:11:39.629833  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:11:39.647298  113618 main.go:141] libmachine: Using SSH client type: native
	I0213 23:11:39.647881  113618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0213 23:11:39.647912  113618 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-660356 && echo "ingress-addon-legacy-660356" | sudo tee /etc/hostname
	I0213 23:11:39.648666  113618 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34660->127.0.0.1:32787: read: connection reset by peer
	I0213 23:11:42.791049  113618 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-660356
	
	I0213 23:11:42.791141  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:11:42.807363  113618 main.go:141] libmachine: Using SSH client type: native
	I0213 23:11:42.807691  113618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0213 23:11:42.807711  113618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-660356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-660356/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-660356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:11:42.940640  113618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
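"SSH client type: native" means libmachine dials the forwarded localhost port with Go's golang.org/x/crypto/ssh package rather than shelling out to the ssh binary. A minimal sketch of running one remote command that way; the key path, user, and port below are placeholder assumptions, not values from minikube's code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runRemote("127.0.0.1:32787", "docker", "/path/to/id_rsa", "hostname")
	fmt.Println(out, err)
}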
	I0213 23:11:42.940665  113618 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18169-66678/.minikube CaCertPath:/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18169-66678/.minikube}
	I0213 23:11:42.940700  113618 ubuntu.go:177] setting up certificates
	I0213 23:11:42.940711  113618 provision.go:83] configureAuth start
	I0213 23:11:42.940777  113618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-660356
	I0213 23:11:42.956961  113618 provision.go:138] copyHostCerts
	I0213 23:11:42.957005  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18169-66678/.minikube/key.pem
	I0213 23:11:42.957043  113618 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-66678/.minikube/key.pem, removing ...
	I0213 23:11:42.957061  113618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-66678/.minikube/key.pem
	I0213 23:11:42.957136  113618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18169-66678/.minikube/key.pem (1679 bytes)
	I0213 23:11:42.957222  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18169-66678/.minikube/ca.pem
	I0213 23:11:42.957243  113618 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-66678/.minikube/ca.pem, removing ...
	I0213 23:11:42.957253  113618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.pem
	I0213 23:11:42.957291  113618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18169-66678/.minikube/ca.pem (1078 bytes)
	I0213 23:11:42.957372  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18169-66678/.minikube/cert.pem
	I0213 23:11:42.957396  113618 exec_runner.go:144] found /home/jenkins/minikube-integration/18169-66678/.minikube/cert.pem, removing ...
	I0213 23:11:42.957405  113618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18169-66678/.minikube/cert.pem
	I0213 23:11:42.957439  113618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18169-66678/.minikube/cert.pem (1123 bytes)
	I0213 23:11:42.957506  113618 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18169-66678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-660356 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-660356]
	I0213 23:11:43.313523  113618 provision.go:172] copyRemoteCerts
	I0213 23:11:43.313588  113618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:11:43.313623  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:11:43.329509  113618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa Username:docker}
	I0213 23:11:43.424703  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0213 23:11:43.424768  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:11:43.446145  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0213 23:11:43.446213  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0213 23:11:43.466823  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0213 23:11:43.466888  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:11:43.488030  113618 provision.go:86] duration metric: configureAuth took 547.301739ms
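configureAuth issues a server certificate whose SAN list carries every name and IP the machine may be dialed by (the san=[...] list logged above). A compact crypto/x509 sketch of SAN-bearing cert generation; it self-signs for brevity, whereas minikube signs with its local CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs: every hostname and IP the server may be reached by must appear here.
		DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-660356"},
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed (template is its own parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}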
	I0213 23:11:43.488062  113618 ubuntu.go:193] setting minikube options for container-runtime
	I0213 23:11:43.488290  113618 config.go:182] Loaded profile config "ingress-addon-legacy-660356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0213 23:11:43.488454  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:11:43.504369  113618 main.go:141] libmachine: Using SSH client type: native
	I0213 23:11:43.504705  113618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0213 23:11:43.504724  113618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:11:43.744351  113618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:11:43.744381  113618 machine.go:91] provisioned docker machine in 4.114645659s
	I0213 23:11:43.744394  113618 client.go:171] LocalClient.Create took 11.795783939s
	I0213 23:11:43.744416  113618 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-660356" took 11.795855461s
	I0213 23:11:43.744426  113618 start.go:300] post-start starting for "ingress-addon-legacy-660356" (driver="docker")
	I0213 23:11:43.744441  113618 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:11:43.744504  113618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:11:43.744555  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:11:43.760493  113618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa Username:docker}
	I0213 23:11:43.856974  113618 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:11:43.860056  113618 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0213 23:11:43.860089  113618 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0213 23:11:43.860097  113618 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0213 23:11:43.860105  113618 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0213 23:11:43.860116  113618 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-66678/.minikube/addons for local assets ...
	I0213 23:11:43.860168  113618 filesync.go:126] Scanning /home/jenkins/minikube-integration/18169-66678/.minikube/files for local assets ...
	I0213 23:11:43.860245  113618 filesync.go:149] local asset: /home/jenkins/minikube-integration/18169-66678/.minikube/files/etc/ssl/certs/734532.pem -> 734532.pem in /etc/ssl/certs
	I0213 23:11:43.860257  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/files/etc/ssl/certs/734532.pem -> /etc/ssl/certs/734532.pem
	I0213 23:11:43.860378  113618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:11:43.867849  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/files/etc/ssl/certs/734532.pem --> /etc/ssl/certs/734532.pem (1708 bytes)
	I0213 23:11:43.888924  113618 start.go:303] post-start completed in 144.478979ms
	I0213 23:11:43.889298  113618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-660356
	I0213 23:11:43.904641  113618 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/config.json ...
	I0213 23:11:43.904980  113618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 23:11:43.905038  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:11:43.921044  113618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa Username:docker}
	I0213 23:11:44.013142  113618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0213 23:11:44.017263  113618 start.go:128] duration metric: createHost completed in 12.071796123s
	I0213 23:11:44.017287  113618 start.go:83] releasing machines lock for "ingress-addon-legacy-660356", held for 12.071913173s
	I0213 23:11:44.017352  113618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-660356
	I0213 23:11:44.033155  113618 ssh_runner.go:195] Run: cat /version.json
	I0213 23:11:44.033203  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:11:44.033225  113618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:11:44.033290  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:11:44.049114  113618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa Username:docker}
	I0213 23:11:44.050013  113618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa Username:docker}
	I0213 23:11:44.140045  113618 ssh_runner.go:195] Run: systemctl --version
	I0213 23:11:44.144306  113618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:11:44.281981  113618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 23:11:44.286432  113618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:11:44.304267  113618 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0213 23:11:44.304419  113618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:11:44.331464  113618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
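The find/mv pair above sidelines any bridge or podman CNI configs baked into the base image by renaming them with a .mk_disabled suffix, so only minikube's chosen CNI (kindnet here) is loaded. The same idea as a short Go sketch, with the glob patterns taken from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rename conflicting CNI configs so the runtime ignores them.
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		for _, f := range matches {
			if strings.HasSuffix(f, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(f, f+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}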
	I0213 23:11:44.331489  113618 start.go:475] detecting cgroup driver to use...
	I0213 23:11:44.331520  113618 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0213 23:11:44.331560  113618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:11:44.345120  113618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:11:44.354953  113618 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:11:44.355033  113618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:11:44.367111  113618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:11:44.379460  113618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:11:44.457828  113618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:11:44.538442  113618 docker.go:233] disabling docker service ...
	I0213 23:11:44.538524  113618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:11:44.555627  113618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:11:44.566051  113618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:11:44.641288  113618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:11:44.718508  113618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:11:44.728647  113618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:11:44.742764  113618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0213 23:11:44.742820  113618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:11:44.751492  113618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:11:44.751557  113618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:11:44.759981  113618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:11:44.768205  113618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:11:44.776682  113618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:11:44.784643  113618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:11:44.791931  113618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:11:44.799064  113618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:11:44.869321  113618 ssh_runner.go:195] Run: sudo systemctl restart crio
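The sed invocations above align CRI-O's drop-in config with the cluster: pause_image is pinned to registry.k8s.io/pause:3.2 and cgroup_manager to cgroupfs, after which the daemon is restarted. A hedged Go equivalent of the pause_image rewrite (path and pattern copied from the log; not minikube's implementation):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Replace any existing pause_image line, mirroring sed 's|^.*pause_image = .*$|...|'.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}
	// A real tool would now run: systemctl restart crio
}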
	I0213 23:11:44.983130  113618 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:11:44.983199  113618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:11:44.986570  113618 start.go:543] Will wait 60s for crictl version
	I0213 23:11:44.986625  113618 ssh_runner.go:195] Run: which crictl
	I0213 23:11:44.989599  113618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:11:45.020305  113618 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0213 23:11:45.020426  113618 ssh_runner.go:195] Run: crio --version
	I0213 23:11:45.052998  113618 ssh_runner.go:195] Run: crio --version
	I0213 23:11:45.089291  113618 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0213 23:11:45.091054  113618 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-660356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0213 23:11:45.108167  113618 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0213 23:11:45.111788  113618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
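The grep/echo pipeline above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the fresh gateway mapping, and copy the file back into place. A minimal Go sketch of the same filter-and-append:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line) // keep everything except the stale entry
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}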
	I0213 23:11:45.122101  113618 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0213 23:11:45.122181  113618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:11:45.164609  113618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0213 23:11:45.164671  113618 ssh_runner.go:195] Run: which lz4
	I0213 23:11:45.167925  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0213 23:11:45.168022  113618 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:11:45.171074  113618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:11:45.171108  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0213 23:11:46.103611  113618 crio.go:444] Took 0.935622 seconds to copy over tarball
	I0213 23:11:46.103710  113618 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:11:48.323115  113618 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219361058s)
	I0213 23:11:48.323149  113618 crio.go:451] Took 2.219503 seconds to extract the tarball
	I0213 23:11:48.323158  113618 ssh_runner.go:146] rm: /preloaded.tar.lz4
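Inside the node the tarball is extracted with --xattrs --xattrs-include security.capability so file capabilities on binaries survive the unpack, then deleted to reclaim space. A small Go sketch shelling out the same way (paths copied from the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Preserve security.capability xattrs so binaries keep their file capabilities.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	// Free the space once the images are unpacked.
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}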
	I0213 23:11:48.392230  113618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:11:48.423520  113618 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0213 23:11:48.423551  113618 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:11:48.423646  113618 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0213 23:11:48.423672  113618 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0213 23:11:48.423694  113618 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 23:11:48.423627  113618 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:11:48.423635  113618 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 23:11:48.423671  113618 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 23:11:48.423676  113618 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 23:11:48.423694  113618 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0213 23:11:48.424875  113618 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0213 23:11:48.424899  113618 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 23:11:48.424925  113618 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0213 23:11:48.424957  113618 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 23:11:48.424982  113618 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 23:11:48.425012  113618 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0213 23:11:48.425021  113618 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 23:11:48.424926  113618 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:11:48.593105  113618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 23:11:48.608545  113618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0213 23:11:48.617222  113618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0213 23:11:48.624371  113618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0213 23:11:48.624419  113618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:11:48.630648  113618 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0213 23:11:48.630690  113618 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 23:11:48.630737  113618 ssh_runner.go:195] Run: which crictl
	I0213 23:11:48.649839  113618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0213 23:11:48.652989  113618 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0213 23:11:48.653040  113618 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 23:11:48.653073  113618 ssh_runner.go:195] Run: which crictl
	I0213 23:11:48.658580  113618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0213 23:11:48.672483  113618 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0213 23:11:48.676388  113618 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0213 23:11:48.676445  113618 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 23:11:48.676491  113618 ssh_runner.go:195] Run: which crictl
	I0213 23:11:48.679868  113618 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0213 23:11:48.679921  113618 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0213 23:11:48.679963  113618 ssh_runner.go:195] Run: which crictl
	I0213 23:11:48.776495  113618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 23:11:48.776514  113618 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0213 23:11:48.776552  113618 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 23:11:48.776607  113618 ssh_runner.go:195] Run: which crictl
	I0213 23:11:48.776609  113618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0213 23:11:48.776665  113618 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0213 23:11:48.776701  113618 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0213 23:11:48.776736  113618 ssh_runner.go:195] Run: which crictl
	I0213 23:11:48.776755  113618 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0213 23:11:48.776786  113618 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0213 23:11:48.776817  113618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0213 23:11:48.776820  113618 ssh_runner.go:195] Run: which crictl
	I0213 23:11:48.780569  113618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0213 23:11:48.780588  113618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0213 23:11:48.867608  113618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0213 23:11:48.868937  113618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0213 23:11:48.868942  113618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0213 23:11:48.877778  113618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0213 23:11:48.877836  113618 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0213 23:11:48.879132  113618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0213 23:11:48.881778  113618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0213 23:11:48.906233  113618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0213 23:11:48.962801  113618 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18169-66678/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0213 23:11:48.962855  113618 cache_images.go:92] LoadImages completed in 539.290916ms
	W0213 23:11:48.962938  113618 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18169-66678/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
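LoadImages first asks the runtime (via podman image inspect) which required images already exist, then tries to load the missing ones from the on-disk cache; here the cache files are absent, so minikube warns and lets kubeadm pull at init time. A toy sketch of that stat-based fallback, with a hypothetical cache layout:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// cachedPath reports whether an image tarball exists in the local cache.
func cachedPath(cacheDir, image string) (string, bool) {
	p := cacheDir + "/" + image // real code escapes ':' etc.; simplified here
	if _, err := os.Stat(p); errors.Is(err, fs.ErrNotExist) {
		return "", false
	}
	return p, true
}

func main() {
	if p, ok := cachedPath("/tmp/cache", "kube-apiserver_v1.18.20"); ok {
		fmt.Println("load from", p)
	} else {
		fmt.Println("cache miss: fall back to pulling at kubeadm init time")
	}
}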
	I0213 23:11:48.962997  113618 ssh_runner.go:195] Run: crio config
	I0213 23:11:49.006205  113618 cni.go:84] Creating CNI manager for ""
	I0213 23:11:49.006227  113618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:11:49.006245  113618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:11:49.006277  113618 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-660356 NodeName:ingress-addon-legacy-660356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 23:11:49.006444  113618 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-660356"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
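The kubeadm config above is rendered from the options struct logged at kubeadm.go:176. A toy text/template sketch of how such a ClusterConfiguration stanza can be generated from a few fields (the template and struct here are illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := struct {
		Endpoint, Version, PodCIDR, ServiceCIDR string
	}{"control-plane.minikube.internal:8443", "v1.18.20", "10.244.0.0/16", "10.96.0.0/12"}
	// Render to stdout; minikube writes its rendered config to /var/tmp/minikube/kubeadm.yaml.new.
	template.Must(template.New("cfg").Parse(tmpl)).Execute(os.Stdout, opts)
}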
	
	I0213 23:11:49.006526  113618 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-660356 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-660356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:11:49.006590  113618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0213 23:11:49.014980  113618 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:11:49.015069  113618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:11:49.023435  113618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0213 23:11:49.039139  113618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0213 23:11:49.055308  113618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0213 23:11:49.071164  113618 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0213 23:11:49.074410  113618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:11:49.084128  113618 certs.go:56] Setting up /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356 for IP: 192.168.49.2
	I0213 23:11:49.084173  113618 certs.go:190] acquiring lock for shared ca certs: {Name:mkdb62e9ebaf532b9b3d230de7912db241faf3db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:11:49.084308  113618 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.key
	I0213 23:11:49.084378  113618 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.key
	I0213 23:11:49.084455  113618 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.key
	I0213 23:11:49.084479  113618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt with IP's: []
	I0213 23:11:49.240940  113618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt ...
	I0213 23:11:49.240981  113618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: {Name:mk201bd808faab8e08f2efabd5abc1f585873501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:11:49.241163  113618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.key ...
	I0213 23:11:49.241180  113618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.key: {Name:mk4f3b7ce89175b3537ab7c1113c7cfab28fad54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:11:49.241256  113618 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.key.dd3b5fb2
	I0213 23:11:49.241278  113618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 23:11:49.501137  113618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.crt.dd3b5fb2 ...
	I0213 23:11:49.501175  113618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.crt.dd3b5fb2: {Name:mkd5bfaf0092b6c3da75332fcf71a86e34035fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:11:49.501343  113618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.key.dd3b5fb2 ...
	I0213 23:11:49.501357  113618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.key.dd3b5fb2: {Name:mk0835cdd926ae61306c23f4e5623f1e7769fb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:11:49.501420  113618 certs.go:337] copying /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.crt
	I0213 23:11:49.501485  113618 certs.go:341] copying /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.key
	I0213 23:11:49.501535  113618 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.key
	I0213 23:11:49.501552  113618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.crt with IP's: []
	I0213 23:11:49.680429  113618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.crt ...
	I0213 23:11:49.680464  113618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.crt: {Name:mk291b99edc54aad22effbcb80bc66f135897cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:11:49.680628  113618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.key ...
	I0213 23:11:49.680642  113618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.key: {Name:mk8f11a310eba75fb93baf95605ca3e86998de60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:11:49.680711  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0213 23:11:49.680734  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0213 23:11:49.680744  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0213 23:11:49.680756  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0213 23:11:49.680766  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0213 23:11:49.680775  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0213 23:11:49.680788  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0213 23:11:49.680801  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0213 23:11:49.680854  113618 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/73453.pem (1338 bytes)
	W0213 23:11:49.680887  113618 certs.go:433] ignoring /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/73453_empty.pem, impossibly tiny 0 bytes
	I0213 23:11:49.680898  113618 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca-key.pem (1679 bytes)
	I0213 23:11:49.680927  113618 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:11:49.680953  113618 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:11:49.680977  113618 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/home/jenkins/minikube-integration/18169-66678/.minikube/certs/key.pem (1679 bytes)
	I0213 23:11:49.681020  113618 certs.go:437] found cert: /home/jenkins/minikube-integration/18169-66678/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18169-66678/.minikube/files/etc/ssl/certs/734532.pem (1708 bytes)
	I0213 23:11:49.681066  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/files/etc/ssl/certs/734532.pem -> /usr/share/ca-certificates/734532.pem
	I0213 23:11:49.681097  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:11:49.681110  113618 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18169-66678/.minikube/certs/73453.pem -> /usr/share/ca-certificates/73453.pem
	I0213 23:11:49.681734  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:11:49.703443  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:11:49.724685  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:11:49.745644  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:11:49.767110  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:11:49.788436  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:11:49.809514  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:11:49.830661  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:11:49.851811  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/files/etc/ssl/certs/734532.pem --> /usr/share/ca-certificates/734532.pem (1708 bytes)
	I0213 23:11:49.872738  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:11:49.893785  113618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18169-66678/.minikube/certs/73453.pem --> /usr/share/ca-certificates/73453.pem (1338 bytes)
	I0213 23:11:49.915154  113618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:11:49.931866  113618 ssh_runner.go:195] Run: openssl version
	I0213 23:11:49.937049  113618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:11:49.945788  113618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:11:49.949115  113618 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:11:49.949180  113618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:11:49.955487  113618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:11:49.963786  113618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73453.pem && ln -fs /usr/share/ca-certificates/73453.pem /etc/ssl/certs/73453.pem"
	I0213 23:11:49.972185  113618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73453.pem
	I0213 23:11:49.975317  113618 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:08 /usr/share/ca-certificates/73453.pem
	I0213 23:11:49.975376  113618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73453.pem
	I0213 23:11:49.981601  113618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73453.pem /etc/ssl/certs/51391683.0"
	I0213 23:11:49.989930  113618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/734532.pem && ln -fs /usr/share/ca-certificates/734532.pem /etc/ssl/certs/734532.pem"
	I0213 23:11:49.998151  113618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/734532.pem
	I0213 23:11:50.001267  113618 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:08 /usr/share/ca-certificates/734532.pem
	I0213 23:11:50.001314  113618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/734532.pem
	I0213 23:11:50.007410  113618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/734532.pem /etc/ssl/certs/3ec20f2e.0"
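The test -L / ln -fs commands above build OpenSSL's CA lookup symlinks: each PEM in the trust store gets a link named <subject-hash>.0 in /etc/ssl/certs, where the hash comes from openssl x509 -hash -noout. A small Go sketch that computes the hash the same way and creates the link:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash used for CA lookup, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
}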
	I0213 23:11:50.015799  113618 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:11:50.018769  113618 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 23:11:50.018826  113618 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-660356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-660356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:11:50.018922  113618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:11:50.018970  113618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:11:50.051094  113618 cri.go:89] found id: ""
	I0213 23:11:50.051171  113618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:11:50.059494  113618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:11:50.067441  113618 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0213 23:11:50.067502  113618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:11:50.075118  113618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:11:50.075163  113618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0213 23:11:50.117051  113618 kubeadm.go:322] W0213 23:11:50.116546    1373 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0213 23:11:50.154318  113618 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
	I0213 23:11:50.221513  113618 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:11:52.619962  113618 kubeadm.go:322] W0213 23:11:52.619535    1373 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 23:11:52.620974  113618 kubeadm.go:322] W0213 23:11:52.620695    1373 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 23:12:01.079923  113618 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0213 23:12:01.080012  113618 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:12:01.080177  113618 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0213 23:12:01.080264  113618 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
	I0213 23:12:01.080333  113618 kubeadm.go:322] OS: Linux
	I0213 23:12:01.080411  113618 kubeadm.go:322] CGROUPS_CPU: enabled
	I0213 23:12:01.080492  113618 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0213 23:12:01.080572  113618 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0213 23:12:01.080646  113618 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0213 23:12:01.080718  113618 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0213 23:12:01.080794  113618 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0213 23:12:01.080909  113618 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:12:01.081045  113618 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:12:01.081190  113618 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0213 23:12:01.081319  113618 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:12:01.081431  113618 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:12:01.081493  113618 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:12:01.081592  113618 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:12:01.083463  113618 out.go:204]   - Generating certificates and keys ...
	I0213 23:12:01.083546  113618 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:12:01.083599  113618 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:12:01.083702  113618 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 23:12:01.083815  113618 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 23:12:01.083919  113618 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 23:12:01.083979  113618 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 23:12:01.084024  113618 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 23:12:01.084183  113618 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-660356 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0213 23:12:01.084229  113618 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 23:12:01.084398  113618 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-660356 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0213 23:12:01.084490  113618 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 23:12:01.084581  113618 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 23:12:01.084630  113618 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 23:12:01.084695  113618 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:12:01.084772  113618 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:12:01.084836  113618 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:12:01.084909  113618 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:12:01.084982  113618 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:12:01.085075  113618 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:12:01.086666  113618 out.go:204]   - Booting up control plane ...
	I0213 23:12:01.086747  113618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:12:01.086866  113618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:12:01.086923  113618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:12:01.087000  113618 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:12:01.087125  113618 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:12:01.087191  113618 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002453 seconds
	I0213 23:12:01.087324  113618 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:12:01.087496  113618 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:12:01.087579  113618 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:12:01.087727  113618 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-660356 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0213 23:12:01.087777  113618 kubeadm.go:322] [bootstrap-token] Using token: nuq4s0.ikkmkwx8vdtgl99j
	I0213 23:12:01.090398  113618 out.go:204]   - Configuring RBAC rules ...
	I0213 23:12:01.090499  113618 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:12:01.090613  113618 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:12:01.090766  113618 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:12:01.090895  113618 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:12:01.091057  113618 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:12:01.091189  113618 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:12:01.091328  113618 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:12:01.091412  113618 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:12:01.091483  113618 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:12:01.091492  113618 kubeadm.go:322] 
	I0213 23:12:01.091589  113618 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:12:01.091599  113618 kubeadm.go:322] 
	I0213 23:12:01.091702  113618 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:12:01.091711  113618 kubeadm.go:322] 
	I0213 23:12:01.091750  113618 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:12:01.091826  113618 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:12:01.091878  113618 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:12:01.091884  113618 kubeadm.go:322] 
	I0213 23:12:01.091925  113618 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:12:01.092000  113618 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:12:01.092078  113618 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:12:01.092084  113618 kubeadm.go:322] 
	I0213 23:12:01.092151  113618 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:12:01.092213  113618 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:12:01.092219  113618 kubeadm.go:322] 
	I0213 23:12:01.092291  113618 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nuq4s0.ikkmkwx8vdtgl99j \
	I0213 23:12:01.092434  113618 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:65a739a3fc766348b9b774a07bf25aabb4395eca8f80a3b593899c4975cd65db \
	I0213 23:12:01.092481  113618 kubeadm.go:322]     --control-plane 
	I0213 23:12:01.092491  113618 kubeadm.go:322] 
	I0213 23:12:01.092585  113618 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:12:01.092594  113618 kubeadm.go:322] 
	I0213 23:12:01.092671  113618 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nuq4s0.ikkmkwx8vdtgl99j \
	I0213 23:12:01.092778  113618 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:65a739a3fc766348b9b774a07bf25aabb4395eca8f80a3b593899c4975cd65db 
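
	The discovery hash printed in the join command above is the SHA-256 of the cluster CA certificate's Subject Public Key Info (SPKI), the pin format kubeadm uses for --discovery-token-ca-cert-hash. A minimal Go sketch that recomputes it; the ca.crt filename under the certificateDir logged above ("/var/lib/minikube/certs") is the standard kubeadm layout and is assumed here:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path assumed from the certificateDir used above; adjust for your cluster.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Printf("sha256:%x\n", sum)
	}

	Run against the CA that signed this cluster, the output should match the "sha256:65a739a3..." value in the join command.
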
	I0213 23:12:01.092799  113618 cni.go:84] Creating CNI manager for ""
	I0213 23:12:01.092809  113618 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:12:01.094500  113618 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0213 23:12:01.095832  113618 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0213 23:12:01.099664  113618 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0213 23:12:01.099682  113618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0213 23:12:01.116245  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0213 23:12:01.522501  113618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:12:01.522598  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802 minikube.k8s.io/name=ingress-addon-legacy-660356 minikube.k8s.io/updated_at=2024_02_13T23_12_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:01.522600  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:01.675400  113618 ops.go:34] apiserver oom_adj: -16
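
	The oom_adj check above ("cat /proc/$(pgrep kube-apiserver)/oom_adj", reported as -16) can be reproduced directly on the node; a small sketch using the same pgrep/procfs approach as the logged command:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the kube-apiserver PID the same way the logged command does.
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(string(out))
		// Read its OOM score adjustment from procfs (expected "-16" above).
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(adj)))
	}
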
	I0213 23:12:01.675522  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:02.175597  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:02.675864  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:03.176044  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:03.675943  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:04.176336  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:04.675581  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:05.176107  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:05.676371  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:06.176587  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:06.676173  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:07.175934  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:07.676408  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:08.176223  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:08.676512  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:09.175743  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:09.676408  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:10.176057  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:10.676463  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:11.176191  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:11.676245  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:12.176533  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:12.675873  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:13.176042  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:13.676046  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:14.176304  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:14.676449  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:15.176507  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:15.676098  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:16.176501  113618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:12:16.275578  113618 kubeadm.go:1088] duration metric: took 14.753060143s to wait for elevateKubeSystemPrivileges.
	I0213 23:12:16.275614  113618 kubeadm.go:406] StartCluster complete in 26.256794365s
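
	The burst of "kubectl get sa default" runs above is a plain poll-until-success loop waiting for the "default" ServiceAccount to appear. A hedged sketch of the same pattern (not minikube's actual elevateKubeSystemPrivileges code; kubectl and kubeconfig paths taken from the log, the 2-minute deadline is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.18.20/kubectl" // path from the log
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Exits 0 only once the "default" ServiceAccount has been created.
			err := exec.Command(kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
		}
		fmt.Println("timed out waiting for default service account")
	}
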
	I0213 23:12:16.275633  113618 settings.go:142] acquiring lock: {Name:mk89817e7b00c42ae84864184d25a5290738d17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:12:16.275695  113618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:12:16.276489  113618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/kubeconfig: {Name:mk1392731503c3f5245f6110a90036e5311cfc32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:12:16.276740  113618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:12:16.276875  113618 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:12:16.276968  113618 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-660356"
	I0213 23:12:16.277001  113618 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-660356"
	I0213 23:12:16.277050  113618 config.go:182] Loaded profile config "ingress-addon-legacy-660356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0213 23:12:16.277085  113618 host.go:66] Checking if "ingress-addon-legacy-660356" exists ...
	I0213 23:12:16.277114  113618 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-660356"
	I0213 23:12:16.277136  113618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-660356"
	I0213 23:12:16.277537  113618 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-660356 --format={{.State.Status}}
	I0213 23:12:16.277634  113618 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-660356 --format={{.State.Status}}
	I0213 23:12:16.277525  113618 kapi.go:59] client config for ingress-addon-legacy-660356: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.key", CAFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 23:12:16.278412  113618 cert_rotation.go:137] Starting client certificate rotation controller
	I0213 23:12:16.304809  113618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:12:16.306787  113618 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:12:16.306814  113618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:12:16.306870  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:12:16.303613  113618 kapi.go:59] client config for ingress-addon-legacy-660356: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.key", CAFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 23:12:16.307270  113618 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-660356"
	I0213 23:12:16.307312  113618 host.go:66] Checking if "ingress-addon-legacy-660356" exists ...
	I0213 23:12:16.307890  113618 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-660356 --format={{.State.Status}}
	I0213 23:12:16.327668  113618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa Username:docker}
	I0213 23:12:16.328013  113618 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:12:16.328035  113618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:12:16.328085  113618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-660356
	I0213 23:12:16.343802  113618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/ingress-addon-legacy-660356/id_rsa Username:docker}
	I0213 23:12:16.477877  113618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:12:16.575742  113618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:12:16.582436  113618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:12:16.782250  113618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-660356" context rescaled to 1 replicas
	I0213 23:12:16.782294  113618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:12:16.784804  113618 out.go:177] * Verifying Kubernetes components...
	I0213 23:12:16.786965  113618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:12:16.800661  113618 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0213 23:12:16.961895  113618 kapi.go:59] client config for ingress-addon-legacy-660356: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt", KeyFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.key", CAFile:"/home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
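
	The rest.Config dumps above are what client-go derives from the profile's kubeconfig. A minimal sketch of building the same client and fetching the node, as the readiness wait below does (k8s.io/client-go assumed as a dependency; the kubeconfig path is the one logged earlier):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the "Updating kubeconfig" line above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18169-66678/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ingress-addon-legacy-660356", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The Ready condition is the one polled in the node_ready.go lines below.
		for _, c := range node.Status.Conditions {
			fmt.Println(node.Name, c.Type, c.Status)
		}
	}
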
	I0213 23:12:16.962324  113618 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-660356" to be "Ready" ...
	I0213 23:12:16.968828  113618 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0213 23:12:16.970130  113618 addons.go:505] enable addons completed in 693.258515ms: enabled=[storage-provisioner default-storageclass]
	I0213 23:12:18.965690  113618 node_ready.go:58] node "ingress-addon-legacy-660356" has status "Ready":"False"
	I0213 23:12:20.968695  113618 node_ready.go:58] node "ingress-addon-legacy-660356" has status "Ready":"False"
	I0213 23:12:21.965365  113618 node_ready.go:49] node "ingress-addon-legacy-660356" has status "Ready":"True"
	I0213 23:12:21.965390  113618 node_ready.go:38] duration metric: took 5.003017463s waiting for node "ingress-addon-legacy-660356" to be "Ready" ...
	I0213 23:12:21.965400  113618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:12:21.971717  113618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:23.975182  113618 pod_ready.go:102] pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:12:15 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:12:25.975254  113618 pod_ready.go:102] pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:12:15 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:12:27.977791  113618 pod_ready.go:102] pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:30.478054  113618 pod_ready.go:102] pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.978327  113618 pod_ready.go:102] pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.978447  113618 pod_ready.go:102] pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:35.979513  113618 pod_ready.go:92] pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace has status "Ready":"True"
	I0213 23:12:35.979544  113618 pod_ready.go:81] duration metric: took 14.007795082s waiting for pod "coredns-66bff467f8-zw9c7" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.979559  113618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-660356" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.984151  113618 pod_ready.go:92] pod "etcd-ingress-addon-legacy-660356" in "kube-system" namespace has status "Ready":"True"
	I0213 23:12:35.984174  113618 pod_ready.go:81] duration metric: took 4.607451ms waiting for pod "etcd-ingress-addon-legacy-660356" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.984188  113618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-660356" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.988849  113618 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-660356" in "kube-system" namespace has status "Ready":"True"
	I0213 23:12:35.988870  113618 pod_ready.go:81] duration metric: took 4.676008ms waiting for pod "kube-apiserver-ingress-addon-legacy-660356" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.988879  113618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-660356" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.993275  113618 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-660356" in "kube-system" namespace has status "Ready":"True"
	I0213 23:12:35.993299  113618 pod_ready.go:81] duration metric: took 4.413244ms waiting for pod "kube-controller-manager-ingress-addon-legacy-660356" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.993313  113618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5mlkd" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.997336  113618 pod_ready.go:92] pod "kube-proxy-5mlkd" in "kube-system" namespace has status "Ready":"True"
	I0213 23:12:35.997358  113618 pod_ready.go:81] duration metric: took 4.038256ms waiting for pod "kube-proxy-5mlkd" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:35.997369  113618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-660356" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:36.172705  113618 request.go:629] Waited for 175.245002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-660356
	I0213 23:12:36.373458  113618 request.go:629] Waited for 198.125241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-660356
	I0213 23:12:36.376190  113618 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-660356" in "kube-system" namespace has status "Ready":"True"
	I0213 23:12:36.376213  113618 pod_ready.go:81] duration metric: took 378.836356ms waiting for pod "kube-scheduler-ingress-addon-legacy-660356" in "kube-system" namespace to be "Ready" ...
	I0213 23:12:36.376224  113618 pod_ready.go:38] duration metric: took 14.410813942s for extra waiting for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:12:36.376240  113618 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:12:36.376305  113618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:12:36.386904  113618 api_server.go:72] duration metric: took 19.604572846s to wait for apiserver process to appear ...
	I0213 23:12:36.386938  113618 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:12:36.386975  113618 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0213 23:12:36.391938  113618 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0213 23:12:36.392789  113618 api_server.go:141] control plane version: v1.18.20
	I0213 23:12:36.392814  113618 api_server.go:131] duration metric: took 5.869102ms to wait for apiserver health ...
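
	The healthz probe above is a plain HTTPS GET against the apiserver, authenticated with the profile's client certificate. A sketch under those assumptions (cert/key/CA paths abbreviated from the config dump above; the full paths live under the minikube profile directory):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		// Profile client cert/key and cluster CA, paths abbreviated from the log.
		cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile("ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect "200 ok", as logged above
	}
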
	I0213 23:12:36.392823  113618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:12:36.573367  113618 request.go:629] Waited for 180.374263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0213 23:12:36.578624  113618 system_pods.go:59] 8 kube-system pods found
	I0213 23:12:36.578655  113618 system_pods.go:61] "coredns-66bff467f8-zw9c7" [05a7baf9-df97-43b5-b433-9bccaf5acef8] Running
	I0213 23:12:36.578660  113618 system_pods.go:61] "etcd-ingress-addon-legacy-660356" [ff624e0d-9d3b-4aee-aaac-daed972ca382] Running
	I0213 23:12:36.578664  113618 system_pods.go:61] "kindnet-69b75" [4b746d64-845d-4560-aa8a-deda9b940c75] Running
	I0213 23:12:36.578668  113618 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-660356" [52d582d5-95da-4f11-9136-e7118ed9bbe8] Running
	I0213 23:12:36.578672  113618 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-660356" [1a9870f3-3f24-419c-a3c2-364cefbbeef0] Running
	I0213 23:12:36.578676  113618 system_pods.go:61] "kube-proxy-5mlkd" [eb9be6b8-3a0d-462f-a225-124af423ebba] Running
	I0213 23:12:36.578680  113618 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-660356" [1a2bcbd9-4bda-44b0-afd7-c51834357b75] Running
	I0213 23:12:36.578684  113618 system_pods.go:61] "storage-provisioner" [7b7d538d-4a31-48cc-a93a-7676ba6700b5] Running
	I0213 23:12:36.578690  113618 system_pods.go:74] duration metric: took 185.861401ms to wait for pod list to return data ...
	I0213 23:12:36.578701  113618 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:12:36.773145  113618 request.go:629] Waited for 194.354518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0213 23:12:36.775479  113618 default_sa.go:45] found service account: "default"
	I0213 23:12:36.775507  113618 default_sa.go:55] duration metric: took 196.79731ms for default service account to be created ...
	I0213 23:12:36.775516  113618 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:12:36.972968  113618 request.go:629] Waited for 197.350681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0213 23:12:36.978652  113618 system_pods.go:86] 8 kube-system pods found
	I0213 23:12:36.978682  113618 system_pods.go:89] "coredns-66bff467f8-zw9c7" [05a7baf9-df97-43b5-b433-9bccaf5acef8] Running
	I0213 23:12:36.978690  113618 system_pods.go:89] "etcd-ingress-addon-legacy-660356" [ff624e0d-9d3b-4aee-aaac-daed972ca382] Running
	I0213 23:12:36.978695  113618 system_pods.go:89] "kindnet-69b75" [4b746d64-845d-4560-aa8a-deda9b940c75] Running
	I0213 23:12:36.978699  113618 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-660356" [52d582d5-95da-4f11-9136-e7118ed9bbe8] Running
	I0213 23:12:36.978703  113618 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-660356" [1a9870f3-3f24-419c-a3c2-364cefbbeef0] Running
	I0213 23:12:36.978707  113618 system_pods.go:89] "kube-proxy-5mlkd" [eb9be6b8-3a0d-462f-a225-124af423ebba] Running
	I0213 23:12:36.978711  113618 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-660356" [1a2bcbd9-4bda-44b0-afd7-c51834357b75] Running
	I0213 23:12:36.978715  113618 system_pods.go:89] "storage-provisioner" [7b7d538d-4a31-48cc-a93a-7676ba6700b5] Running
	I0213 23:12:36.978723  113618 system_pods.go:126] duration metric: took 203.201479ms to wait for k8s-apps to be running ...
	I0213 23:12:36.978737  113618 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:12:36.978784  113618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:12:36.989755  113618 system_svc.go:56] duration metric: took 11.006129ms WaitForService to wait for kubelet.
	I0213 23:12:36.989785  113618 kubeadm.go:581] duration metric: took 20.207461777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:12:36.989821  113618 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:12:37.173249  113618 request.go:629] Waited for 183.345332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0213 23:12:37.176084  113618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0213 23:12:37.176113  113618 node_conditions.go:123] node cpu capacity is 8
	I0213 23:12:37.176126  113618 node_conditions.go:105] duration metric: took 186.298419ms to run NodePressure ...
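
	The NodePressure verification above boils down to reading each node's capacity and pressure conditions. A sketch of the equivalent client-go read, reusing the kubeconfig from the earlier example (not minikube's node_conditions.go implementation):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18169-66678/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The capacity fields logged above: cpu 8, ephemeral-storage 304681132Ki.
			fmt.Println(n.Name,
				"cpu:", n.Status.Capacity.Cpu().String(),
				"ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
					fmt.Println("  ", c.Type, c.Status)
				}
			}
		}
	}
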
	I0213 23:12:37.176139  113618 start.go:228] waiting for startup goroutines ...
	I0213 23:12:37.176147  113618 start.go:233] waiting for cluster config update ...
	I0213 23:12:37.176161  113618 start.go:242] writing updated cluster config ...
	I0213 23:12:37.176503  113618 ssh_runner.go:195] Run: rm -f paused
	I0213 23:12:37.224176  113618 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0213 23:12:37.226407  113618 out.go:177] 
	W0213 23:12:37.228061  113618 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0213 23:12:37.229506  113618 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0213 23:12:37.230839  113618 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-660356" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 13 23:15:28 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:28.352122903Z" level=info msg="Started container" PID=4871 containerID=defa1f15df60d924a843fe25b1471d6ec49aab904e6d09eba1d4f327902e3556 description=default/hello-world-app-5f5d8b66bb-nx8l4/hello-world-app id=230112ae-ffab-4838-b102-3efb90e1b08a name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=e67a3649f1cc82e6bfc3674440fadac0cf866812226a50f50a37aac93f5904ae
	Feb 13 23:15:39 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:39.366680454Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=7bfc6b67-d4e1-4b3b-8a01-fe4417c6c5a3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Feb 13 23:15:43 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:43.367215480Z" level=info msg="Stopping pod sandbox: e4f7aef9a299d6c8ccd63d33463df3e61266a3abbf5fef9ed3a0168b22041eec" id=c5814a2a-184a-4f00-a1d4-012fc507d758 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:43 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:43.368255882Z" level=info msg="Stopped pod sandbox: e4f7aef9a299d6c8ccd63d33463df3e61266a3abbf5fef9ed3a0168b22041eec" id=c5814a2a-184a-4f00-a1d4-012fc507d758 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:43 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:43.747748311Z" level=info msg="Stopping pod sandbox: e4f7aef9a299d6c8ccd63d33463df3e61266a3abbf5fef9ed3a0168b22041eec" id=1976688e-4d12-4ef6-bb61-fdb627eadda6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:43 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:43.747830386Z" level=info msg="Stopped pod sandbox (already stopped): e4f7aef9a299d6c8ccd63d33463df3e61266a3abbf5fef9ed3a0168b22041eec" id=1976688e-4d12-4ef6-bb61-fdb627eadda6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:44 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:44.510884272Z" level=info msg="Stopping container: 4579e657e96074cc45fca02255494759ef4c1d9f4b7b94a30ec0a8b0062a2510 (timeout: 2s)" id=61772f7b-be8d-4ef8-afcb-f39633c237fd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 13 23:15:44 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:44.513073481Z" level=info msg="Stopping container: 4579e657e96074cc45fca02255494759ef4c1d9f4b7b94a30ec0a8b0062a2510 (timeout: 2s)" id=e3820a18-4905-4500-9388-e3184e3c90ea name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 13 23:15:45 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:45.366108353Z" level=info msg="Stopping pod sandbox: e4f7aef9a299d6c8ccd63d33463df3e61266a3abbf5fef9ed3a0168b22041eec" id=00407e47-abf3-4c66-9d25-4ef0e183c488 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:45 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:45.366164651Z" level=info msg="Stopped pod sandbox (already stopped): e4f7aef9a299d6c8ccd63d33463df3e61266a3abbf5fef9ed3a0168b22041eec" id=00407e47-abf3-4c66-9d25-4ef0e183c488 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.519298579Z" level=warning msg="Stopping container 4579e657e96074cc45fca02255494759ef4c1d9f4b7b94a30ec0a8b0062a2510 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=61772f7b-be8d-4ef8-afcb-f39633c237fd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 13 23:15:46 ingress-addon-legacy-660356 conmon[3411]: conmon 4579e657e96074cc45fc <ninfo>: container 3423 exited with status 137
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.664242394Z" level=info msg="Stopped container 4579e657e96074cc45fca02255494759ef4c1d9f4b7b94a30ec0a8b0062a2510: ingress-nginx/ingress-nginx-controller-7fcf777cb7-sqtzz/controller" id=61772f7b-be8d-4ef8-afcb-f39633c237fd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.664278384Z" level=info msg="Stopped container 4579e657e96074cc45fca02255494759ef4c1d9f4b7b94a30ec0a8b0062a2510: ingress-nginx/ingress-nginx-controller-7fcf777cb7-sqtzz/controller" id=e3820a18-4905-4500-9388-e3184e3c90ea name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.664965670Z" level=info msg="Stopping pod sandbox: f02c5f0dc450d5f265516bdebf993c1a87c96423223bf533ea8c1c599ddd71a5" id=cbd6b364-f8a9-4f44-a5ae-aa890d370f32 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.664978975Z" level=info msg="Stopping pod sandbox: f02c5f0dc450d5f265516bdebf993c1a87c96423223bf533ea8c1c599ddd71a5" id=a5c175a1-8249-4b0f-a52e-17195cbf202e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.667674830Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-5JTJ24MQR2BCUF2Z - [0:0]\n:KUBE-HP-Y3VBHW4W2I3IIY3T - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-Y3VBHW4W2I3IIY3T\n-X KUBE-HP-5JTJ24MQR2BCUF2Z\nCOMMIT\n"
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.669009452Z" level=info msg="Closing host port tcp:80"
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.669046193Z" level=info msg="Closing host port tcp:443"
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.670084551Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.670105658Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.670240057Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-sqtzz Namespace:ingress-nginx ID:f02c5f0dc450d5f265516bdebf993c1a87c96423223bf533ea8c1c599ddd71a5 UID:fbd92a75-16e8-46d0-aa1e-69d13f6e40ec NetNS:/var/run/netns/6dd0c5c1-748c-4285-959e-f5bb8cc6a841 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.670359150Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-sqtzz from CNI network \"kindnet\" (type=ptp)"
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.701574217Z" level=info msg="Stopped pod sandbox: f02c5f0dc450d5f265516bdebf993c1a87c96423223bf533ea8c1c599ddd71a5" id=cbd6b364-f8a9-4f44-a5ae-aa890d370f32 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 13 23:15:46 ingress-addon-legacy-660356 crio[956]: time="2024-02-13 23:15:46.701695797Z" level=info msg="Stopped pod sandbox (already stopped): f02c5f0dc450d5f265516bdebf993c1a87c96423223bf533ea8c1c599ddd71a5" id=a5c175a1-8249-4b0f-a52e-17195cbf202e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	defa1f15df60d       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            23 seconds ago      Running             hello-world-app           0                   e67a3649f1cc8       hello-world-app-5f5d8b66bb-nx8l4
	261971dbaa60c       docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027                    2 minutes ago       Running             nginx                     0                   e14637823cc0a       nginx
	4579e657e9607       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   f02c5f0dc450d       ingress-nginx-controller-7fcf777cb7-sqtzz
	56169a3481cc6       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   c4bcc41dedff5       ingress-nginx-admission-patch-h8m65
	e6424699062c2       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   74676c8f08dff       ingress-nginx-admission-create-68pw2
	0cd0588189169       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   e514c2b2f6c70       coredns-66bff467f8-zw9c7
	2d8a743c3243a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   9844158d980ac       storage-provisioner
	fc8528c22d4bb       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   a810df57885ce       kindnet-69b75
	70c2ec6f7bdae       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   a4d2a14fa0c0f       kube-proxy-5mlkd
	7aa1a0ed6e302       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   e7f39102c6ab4       kube-scheduler-ingress-addon-legacy-660356
	3484ab0b263a4       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   7cde062f874ee       kube-apiserver-ingress-addon-legacy-660356
	42691649ecc5c       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   cbb3205f71a51       kube-controller-manager-ingress-addon-legacy-660356
	77c7da339e1ce       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   87aebb28f815e       etcd-ingress-addon-legacy-660356
	
	
	==> coredns [0cd05881891693a0d447e446bbaadd6f7cd644be1dd7980b3fd21b3f35db59e8] <==
	[INFO] 10.244.0.5:42456 - 64123 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002571034s
	[INFO] 10.244.0.5:46989 - 62960 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006338033s
	[INFO] 10.244.0.5:54156 - 49908 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006482171s
	[INFO] 10.244.0.5:58054 - 13558 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006339944s
	[INFO] 10.244.0.5:34085 - 50062 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006505275s
	[INFO] 10.244.0.5:53706 - 65026 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006254016s
	[INFO] 10.244.0.5:42456 - 9609 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006425919s
	[INFO] 10.244.0.5:51266 - 46580 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006740311s
	[INFO] 10.244.0.5:54965 - 57922 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006568207s
	[INFO] 10.244.0.5:42456 - 62074 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007814471s
	[INFO] 10.244.0.5:53706 - 4344 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007799141s
	[INFO] 10.244.0.5:34085 - 17424 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007858632s
	[INFO] 10.244.0.5:51266 - 33899 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00768119s
	[INFO] 10.244.0.5:54965 - 43841 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007779125s
	[INFO] 10.244.0.5:46989 - 59087 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008066746s
	[INFO] 10.244.0.5:42456 - 51573 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006816s
	[INFO] 10.244.0.5:54156 - 47815 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008039392s
	[INFO] 10.244.0.5:58054 - 55975 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008182107s
	[INFO] 10.244.0.5:53706 - 38963 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005946s
	[INFO] 10.244.0.5:46989 - 31814 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052012s
	[INFO] 10.244.0.5:54965 - 61412 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004303s
	[INFO] 10.244.0.5:51266 - 19671 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050705s
	[INFO] 10.244.0.5:58054 - 43318 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054426s
	[INFO] 10.244.0.5:34085 - 63892 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000134996s
	[INFO] 10.244.0.5:54156 - 59279 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000213179s
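
	The NXDOMAIN/NOERROR pattern above is resolv.conf search-path expansion: with the kubelet's default ndots:5, the pod resolver first tries the name with each search suffix appended (here the GCE VM suffixes c.k8s-minikube.internal and google.internal), and only the fully qualified cluster name answers NOERROR. A sketch of issuing the query that succeeds, using the FQDN from the log (this resolves only from inside the cluster, where CoreDNS is the configured resolver):

	package main

	import (
		"context"
		"fmt"
		"net"
	)

	func main() {
		// The query that returned NOERROR above; the trailing labels make it
		// absolute within the cluster's search domains.
		ips, err := net.DefaultResolver.LookupIPAddr(context.TODO(),
			"hello-world-app.default.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		for _, ip := range ips {
			fmt.Println(ip.String())
		}
	}
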
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-660356
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-660356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90664111bc55fed26ce3e984eae935c06b114802
	                    minikube.k8s.io/name=ingress-addon-legacy-660356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T23_12_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:11:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-660356
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:15:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:15:31 +0000   Tue, 13 Feb 2024 23:11:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:15:31 +0000   Tue, 13 Feb 2024 23:11:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:15:31 +0000   Tue, 13 Feb 2024 23:11:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:15:31 +0000   Tue, 13 Feb 2024 23:12:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-660356
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8b48aadce9a4fe0885eb49bcb5c7003
	  System UUID:                a3911f1d-90f0-4a3f-b22d-cc0a557a53e6
	  Boot ID:                    997a1092-3efa-483b-88f8-21b3b3d49d89
	  Kernel Version:             5.15.0-1051-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-nx8l4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-zw9c7                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m37s
	  kube-system                 etcd-ingress-addon-legacy-660356                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kindnet-69b75                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m37s
	  kube-system                 kube-apiserver-ingress-addon-legacy-660356             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-660356    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-proxy-5mlkd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-scheduler-ingress-addon-legacy-660356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m51s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s  kubelet     Node ingress-addon-legacy-660356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s  kubelet     Node ingress-addon-legacy-660356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s  kubelet     Node ingress-addon-legacy-660356 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m36s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m31s  kubelet     Node ingress-addon-legacy-660356 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.004919] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006591] FS-Cache: N-cookie d=0000000057408085{9p.inode} n=00000000db312944
	[  +0.007351] FS-Cache: N-key=[8] '41a20f0200000000'
	[  +0.286432] FS-Cache: Duplicate cookie detected
	[  +0.004686] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006740] FS-Cache: O-cookie d=0000000057408085{9p.inode} n=0000000041d110d5
	[  +0.007353] FS-Cache: O-key=[8] '47a20f0200000000'
	[  +0.004966] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006572] FS-Cache: N-cookie d=0000000057408085{9p.inode} n=00000000823f37cb
	[  +0.007344] FS-Cache: N-key=[8] '47a20f0200000000'
	[Feb13 23:11] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb13 23:13] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 76 1c 09 b4 6f 12 66 bb 04 2a c4 d1 08 00
	[  +1.015676] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 76 1c 09 b4 6f 12 66 bb 04 2a c4 d1 08 00
	[  +2.015800] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 1c 09 b4 6f 12 66 bb 04 2a c4 d1 08 00
	[  +4.127574] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 76 1c 09 b4 6f 12 66 bb 04 2a c4 d1 08 00
	[  +8.191199] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 76 1c 09 b4 6f 12 66 bb 04 2a c4 d1 08 00
	[ +16.126406] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 76 1c 09 b4 6f 12 66 bb 04 2a c4 d1 08 00
	[Feb13 23:14] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 76 1c 09 b4 6f 12 66 bb 04 2a c4 d1 08 00
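	
	Note: the "martian source 10.244.0.5 from 127.0.0.1" bursts above are the kernel rejecting packets that carry the loopback source address 127.0.0.1 on eth0 while addressed to pod 10.244.0.5, and their timestamps (Feb13 23:13-23:14) fall inside the window in which the failing ingress validation runs its loopback curl on this node. That is consistent with the curl probe timing out, though the log alone does not prove causation. A minimal way to inspect the relevant kernel knobs (standard sysctls, shown as a sketch only):
	
	  # sketch: log_martians controls the logging seen above, rp_filter controls the drop
	  sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians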
	
	
	==> etcd [77c7da339e1ce03528da25f91a156016c5291ce719151b7c21011bc4b07b075b] <==
	raft2024/02/13 23:11:54 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/02/13 23:11:54 INFO: aec36adc501070cc became follower at term 1
	raft2024/02/13 23:11:54 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-13 23:11:54.190080 W | auth: simple token is not cryptographically signed
	2024-02-13 23:11:54.193370 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-13 23:11:54.193861 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/13 23:11:54 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-13 23:11:54.194423 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-02-13 23:11:54.195690 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-13 23:11:54.195784 I | embed: listening for peers on 192.168.49.2:2380
	2024-02-13 23:11:54.195979 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/02/13 23:11:54 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/02/13 23:11:54 INFO: aec36adc501070cc became candidate at term 2
	raft2024/02/13 23:11:54 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/02/13 23:11:54 INFO: aec36adc501070cc became leader at term 2
	raft2024/02/13 23:11:54 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-02-13 23:11:54.281380 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-13 23:11:54.282362 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-13 23:11:54.282685 I | etcdserver/api: enabled capabilities for version 3.4
	2024-02-13 23:11:54.282734 I | etcdserver: published {Name:ingress-addon-legacy-660356 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-02-13 23:11:54.282765 I | embed: ready to serve client requests
	2024-02-13 23:11:54.283021 I | embed: ready to serve client requests
	2024-02-13 23:11:54.284403 I | embed: serving client requests on 192.168.49.2:2379
	2024-02-13 23:11:54.284977 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-13 23:12:22.676450 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (121.736915ms) to execute
	
	
	==> kernel <==
	 23:15:52 up  1:58,  0 users,  load average: 0.17, 1.08, 1.38
	Linux ingress-addon-legacy-660356 5.15.0-1051-gcp #59~20.04.1-Ubuntu SMP Thu Jan 25 02:51:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [fc8528c22d4bb85f1687a59cafd74d57eb01294fd20e595ffd24b4d24f280d86] <==
	I0213 23:13:49.427577       1 main.go:227] handling current node
	I0213 23:13:59.436495       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:13:59.436523       1 main.go:227] handling current node
	I0213 23:14:09.439872       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:14:09.439897       1 main.go:227] handling current node
	I0213 23:14:19.451328       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:14:19.451353       1 main.go:227] handling current node
	I0213 23:14:29.454976       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:14:29.455003       1 main.go:227] handling current node
	I0213 23:14:39.467180       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:14:39.467205       1 main.go:227] handling current node
	I0213 23:14:49.471426       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:14:49.471455       1 main.go:227] handling current node
	I0213 23:14:59.483416       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:14:59.483439       1 main.go:227] handling current node
	I0213 23:15:09.487199       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:15:09.487225       1 main.go:227] handling current node
	I0213 23:15:19.490955       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:15:19.490992       1 main.go:227] handling current node
	I0213 23:15:29.495505       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:15:29.495536       1 main.go:227] handling current node
	I0213 23:15:39.507733       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:15:39.507756       1 main.go:227] handling current node
	I0213 23:15:49.511581       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0213 23:15:49.511605       1 main.go:227] handling current node
	
	
	==> kube-apiserver [3484ab0b263a44069eefbcd45ec93d9cf92966bc090f0b83952ebc25c9413322] <==
	I0213 23:11:58.000406       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 23:11:58.000846       1 cache.go:39] Caches are synced for autoregister controller
	I0213 23:11:58.001228       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0213 23:11:58.001228       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0213 23:11:58.060507       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0213 23:11:58.899547       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0213 23:11:58.899684       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0213 23:11:58.904312       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0213 23:11:58.907073       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0213 23:11:58.907091       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0213 23:11:59.185606       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 23:11:59.216850       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0213 23:11:59.291503       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0213 23:11:59.292428       1 controller.go:609] quota admission added evaluator for: endpoints
	I0213 23:11:59.295379       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0213 23:12:00.232802       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0213 23:12:00.927517       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0213 23:12:01.067923       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0213 23:12:01.328186       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0213 23:12:15.793710       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0213 23:12:15.939088       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0213 23:12:37.922176       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0213 23:13:04.669153       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0213 23:15:44.367660       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E0213 23:15:44.520716       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
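	
	Note: the two "Token has been invalidated" authentication errors at 23:15:44 coincide with the ingress addon being torn down: deleting the ingress-nginx service account invalidates its token while the controller pod is still trying to report events (the rejected "Killing" events in the kubelet log below show the namespace already terminating). This reads as expected teardown noise rather than a separate failure. One way to confirm the namespace state at that moment would be something like the following (sketch only; by this point the objects are typically already gone):
	
	  kubectl --context ingress-addon-legacy-660356 get namespace ingress-nginx -o jsonpath='{.status.phase}'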
	
	
	==> kube-controller-manager [42691649ecc5cedd6d095e100b679d2c16342646de423772f5d011a8be9f138a] <==
	I0213 23:12:15.886313       1 disruption.go:339] Sending events to api server.
	I0213 23:12:15.936683       1 shared_informer.go:230] Caches are synced for deployment 
	I0213 23:12:15.941059       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1f282350-8355-47b9-9bc8-c147eeee75fe", APIVersion:"apps/v1", ResourceVersion:"205", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0213 23:12:15.945598       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"00fe171f-2ef7-41b9-bb42-ef7e46dd7495", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-d97rg
	I0213 23:12:15.950792       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"00fe171f-2ef7-41b9-bb42-ef7e46dd7495", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-zw9c7
	I0213 23:12:15.986744       1 shared_informer.go:230] Caches are synced for endpoint 
	I0213 23:12:16.130859       1 shared_informer.go:230] Caches are synced for HPA 
	I0213 23:12:16.142869       1 shared_informer.go:230] Caches are synced for resource quota 
	I0213 23:12:16.186615       1 shared_informer.go:230] Caches are synced for resource quota 
	I0213 23:12:16.188957       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0213 23:12:16.194438       1 shared_informer.go:230] Caches are synced for attach detach 
	I0213 23:12:16.285216       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0213 23:12:16.285241       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0213 23:12:16.306232       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1f282350-8355-47b9-9bc8-c147eeee75fe", APIVersion:"apps/v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0213 23:12:16.365986       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"00fe171f-2ef7-41b9-bb42-ef7e46dd7495", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-d97rg
	I0213 23:12:25.782041       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0213 23:12:25.782896       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod kube-system/storage-provisioner
	I0213 23:12:37.915547       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3c9b83c4-a184-44fc-a0db-3cfe9eb32be2", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0213 23:12:37.920798       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"61500867-19f2-4f6f-81aa-8c8bb70be3a5", APIVersion:"apps/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-sqtzz
	I0213 23:12:37.967808       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c0fa1ef0-7dcc-47de-b298-bf8574fa82ce", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-68pw2
	I0213 23:12:37.982106       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"cd342e0a-ed26-4c41-8732-2eeb75bd82da", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-h8m65
	I0213 23:12:41.439165       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c0fa1ef0-7dcc-47de-b298-bf8574fa82ce", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0213 23:12:41.445150       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"cd342e0a-ed26-4c41-8732-2eeb75bd82da", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0213 23:15:26.370455       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"137f2c27-554c-4fd7-ac8c-b430463158c0", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0213 23:15:26.375257       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"fb16faa8-a206-463d-a646-ba4aa132bd36", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-nx8l4
	
	
	==> kube-proxy [70c2ec6f7bdae2862d2c05acca0d006bb8ddd7b73589efa8e93fd7f089b15bdf] <==
	W0213 23:12:16.371791       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0213 23:12:16.383108       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0213 23:12:16.383144       1 server_others.go:186] Using iptables Proxier.
	I0213 23:12:16.383451       1 server.go:583] Version: v1.18.20
	I0213 23:12:16.384007       1 config.go:133] Starting endpoints config controller
	I0213 23:12:16.384047       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0213 23:12:16.384144       1 config.go:315] Starting service config controller
	I0213 23:12:16.384156       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0213 23:12:16.484218       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0213 23:12:16.484280       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [7aa1a0ed6e3025e4c416eacad6a32f80247018b30a45c871fd94bec12b53b8c1] <==
	W0213 23:11:57.968802       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:11:57.968829       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0213 23:11:57.968836       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0213 23:11:57.980194       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0213 23:11:57.980216       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0213 23:11:57.982063       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 23:11:57.982098       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 23:11:57.982443       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0213 23:11:57.982478       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0213 23:11:57.983323       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:11:57.984505       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 23:11:57.984769       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:11:57.984954       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:11:57.984987       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:11:57.985144       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:11:57.985170       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 23:11:57.985186       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 23:11:57.985211       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:11:57.985349       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 23:11:57.985411       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 23:11:57.985599       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 23:11:58.853173       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 23:11:58.995590       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 23:11:59.033887       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0213 23:11:59.282277       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Feb 13 23:15:09 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:09.366919    1854 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 13 23:15:09 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:09.366944    1854 pod_workers.go:191] Error syncing pod 1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e ("kube-ingress-dns-minikube_kube-system(1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 13 23:15:24 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:24.366773    1854 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 13 23:15:24 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:24.366830    1854 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 13 23:15:24 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:24.366890    1854 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 13 23:15:24 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:24.366930    1854 pod_workers.go:191] Error syncing pod 1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e ("kube-ingress-dns-minikube_kube-system(1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 13 23:15:26 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:26.381366    1854 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Feb 13 23:15:26 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:26.496149    1854 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rgklb" (UniqueName: "kubernetes.io/secret/0ec16ca3-a9ec-4841-9512-134fa8e18871-default-token-rgklb") pod "hello-world-app-5f5d8b66bb-nx8l4" (UID: "0ec16ca3-a9ec-4841-9512-134fa8e18871")
	Feb 13 23:15:26 ingress-addon-legacy-660356 kubelet[1854]: W0213 23:15:26.711996    1854 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/613fc374591f8dceed0dd6bbbbe1ea1ac5682275d59aaa64bb28a6f53a35e0e9/crio-e67a3649f1cc82e6bfc3674440fadac0cf866812226a50f50a37aac93f5904ae WatchSource:0}: Error finding container e67a3649f1cc82e6bfc3674440fadac0cf866812226a50f50a37aac93f5904ae: Status 404 returned error &{%!s(*http.body=&{0xc0001e4ea0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Feb 13 23:15:39 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:39.367013    1854 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 13 23:15:39 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:39.367061    1854 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 13 23:15:39 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:39.367117    1854 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 13 23:15:39 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:39.367150    1854 pod_workers.go:191] Error syncing pod 1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e ("kube-ingress-dns-minikube_kube-system(1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 13 23:15:42 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:42.196123    1854 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-fbjtm" (UniqueName: "kubernetes.io/secret/1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e-minikube-ingress-dns-token-fbjtm") pod "1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e" (UID: "1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e")
	Feb 13 23:15:42 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:42.198280    1854 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e-minikube-ingress-dns-token-fbjtm" (OuterVolumeSpecName: "minikube-ingress-dns-token-fbjtm") pod "1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e" (UID: "1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e"). InnerVolumeSpecName "minikube-ingress-dns-token-fbjtm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 23:15:42 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:42.296481    1854 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-fbjtm" (UniqueName: "kubernetes.io/secret/1c09a3ba-29bc-4f5c-8b7f-9571c4f9d16e-minikube-ingress-dns-token-fbjtm") on node "ingress-addon-legacy-660356" DevicePath ""
	Feb 13 23:15:44 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:44.511981    1854 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-sqtzz.17b38f35b376b1c3", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-sqtzz", UID:"fbd92a75-16e8-46d0-aa1e-69d13f6e40ec", APIVersion:"v1", ResourceVersion:"476", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-660356"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b1be81e6d71c3, ext:223619899957, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b1be81e6d71c3, ext:223619899957, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-sqtzz.17b38f35b376b1c3" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 13 23:15:44 ingress-addon-legacy-660356 kubelet[1854]: E0213 23:15:44.515489    1854 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-sqtzz.17b38f35b376b1c3", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-sqtzz", UID:"fbd92a75-16e8-46d0-aa1e-69d13f6e40ec", APIVersion:"v1", ResourceVersion:"476", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-660356"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b1be81e6d71c3, ext:223619899957, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b1be81e90714b, ext:223622193602, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-sqtzz.17b38f35b376b1c3" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 13 23:15:46 ingress-addon-legacy-660356 kubelet[1854]: W0213 23:15:46.743591    1854 pod_container_deletor.go:77] Container "f02c5f0dc450d5f265516bdebf993c1a87c96423223bf533ea8c1c599ddd71a5" not found in pod's containers
	Feb 13 23:15:48 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:48.670863    1854 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-z46xp" (UniqueName: "kubernetes.io/secret/fbd92a75-16e8-46d0-aa1e-69d13f6e40ec-ingress-nginx-token-z46xp") pod "fbd92a75-16e8-46d0-aa1e-69d13f6e40ec" (UID: "fbd92a75-16e8-46d0-aa1e-69d13f6e40ec")
	Feb 13 23:15:48 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:48.670927    1854 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/fbd92a75-16e8-46d0-aa1e-69d13f6e40ec-webhook-cert") pod "fbd92a75-16e8-46d0-aa1e-69d13f6e40ec" (UID: "fbd92a75-16e8-46d0-aa1e-69d13f6e40ec")
	Feb 13 23:15:48 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:48.672920    1854 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd92a75-16e8-46d0-aa1e-69d13f6e40ec-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "fbd92a75-16e8-46d0-aa1e-69d13f6e40ec" (UID: "fbd92a75-16e8-46d0-aa1e-69d13f6e40ec"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 23:15:48 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:48.673103    1854 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd92a75-16e8-46d0-aa1e-69d13f6e40ec-ingress-nginx-token-z46xp" (OuterVolumeSpecName: "ingress-nginx-token-z46xp") pod "fbd92a75-16e8-46d0-aa1e-69d13f6e40ec" (UID: "fbd92a75-16e8-46d0-aa1e-69d13f6e40ec"). InnerVolumeSpecName "ingress-nginx-token-z46xp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 23:15:48 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:48.771252    1854 reconciler.go:319] Volume detached for volume "ingress-nginx-token-z46xp" (UniqueName: "kubernetes.io/secret/fbd92a75-16e8-46d0-aa1e-69d13f6e40ec-ingress-nginx-token-z46xp") on node "ingress-addon-legacy-660356" DevicePath ""
	Feb 13 23:15:48 ingress-addon-legacy-660356 kubelet[1854]: I0213 23:15:48.771295    1854 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/fbd92a75-16e8-46d0-aa1e-69d13f6e40ec-webhook-cert") on node "ingress-addon-legacy-660356" DevicePath ""
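	
	Note: the dominant error in this kubelet log is the repeated ImageInspectError for the minikube-ingress-dns image. "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." is a short name (it carries no registry host), and the node's /etc/containers/registries.conf defines no unqualified-search registries, so CRI-O rejects the reference before attempting any pull; the kube-ingress-dns-minikube pod therefore never starts. As a sketch of what would satisfy the resolver (not necessarily how the addon is actually fixed), either add a search registry:
	
	  # /etc/containers/registries.conf (assumed default path; sketch only)
	  unqualified-search-registries = ["docker.io"]
	
	or reference the image by a fully qualified name such as docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab.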
	
	
	==> storage-provisioner [2d8a743c3243ab00968f00f88aa1af07c476b52c4e2397f204eef8b404053264] <==
	I0213 23:12:22.497340       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:12:22.504789       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:12:22.504829       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:12:22.547135       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:12:22.547260       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1fabff94-080d-4757-9493-e8159fc629bb", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-660356_17058c49-e959-45f4-a51b-4a018dc70e5c became leader
	I0213 23:12:22.547370       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-660356_17058c49-e959-45f4-a51b-4a018dc70e5c!
	I0213 23:12:22.648428       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-660356_17058c49-e959-45f4-a51b-4a018dc70e5c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-660356 -n ingress-addon-legacy-660356
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-660356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (183.78s)
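	
	Note: two independent symptoms appear in this post-mortem: the kube-ingress-dns-minikube container never starts (short-name image resolution, see the kubelet log above), and the loopback curl against the ingress appears to be dropped on the node (see the martian-source entries in dmesg). The image-resolution half can be reproduced in isolation while the profile is still running, for example (sketch only; assumes crictl is available on the node, as it is for the crio runtime):
	
	  # expected: the same short-name resolution error as in the kubelet log
	  minikube -p ingress-addon-legacy-660356 ssh -- sudo crictl pull cryptexlabs/minikube-ingress-dns:0.3.0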

                                                
                                    

Test pass (290/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.74
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.21
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 6.34
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 8.1
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
29 TestDownloadOnlyKic 1.29
30 TestBinaryMirror 0.74
31 TestOffline 88.52
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 155.25
38 TestAddons/parallel/Registry 15.9
40 TestAddons/parallel/InspektorGadget 11.8
41 TestAddons/parallel/MetricsServer 7.53
42 TestAddons/parallel/HelmTiller 9.98
44 TestAddons/parallel/CSI 55.12
45 TestAddons/parallel/Headlamp 12.4
46 TestAddons/parallel/CloudSpanner 5.49
47 TestAddons/parallel/LocalPath 10.13
48 TestAddons/parallel/NvidiaDevicePlugin 5.47
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 12.1
54 TestCertOptions 27.65
55 TestCertExpiration 241.74
57 TestForceSystemdFlag 30.15
58 TestForceSystemdEnv 28.85
60 TestKVMDriverInstallOrUpdate 3.33
64 TestErrorSpam/setup 23.73
65 TestErrorSpam/start 0.63
66 TestErrorSpam/status 0.92
67 TestErrorSpam/pause 1.52
68 TestErrorSpam/unpause 1.52
69 TestErrorSpam/stop 1.38
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 68.8
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.37
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.98
81 TestFunctional/serial/CacheCmd/cache/add_local 1.15
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 31.67
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.37
92 TestFunctional/serial/LogsFileCmd 1.4
93 TestFunctional/serial/InvalidService 4.62
95 TestFunctional/parallel/ConfigCmd 0.49
96 TestFunctional/parallel/DashboardCmd 13.37
97 TestFunctional/parallel/DryRun 0.51
98 TestFunctional/parallel/InternationalLanguage 0.19
99 TestFunctional/parallel/StatusCmd 1.16
103 TestFunctional/parallel/ServiceCmdConnect 7.69
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 32.84
107 TestFunctional/parallel/SSHCmd 0.64
108 TestFunctional/parallel/CpCmd 2.27
109 TestFunctional/parallel/MySQL 19.63
110 TestFunctional/parallel/FileSync 0.3
111 TestFunctional/parallel/CertSync 2.22
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
119 TestFunctional/parallel/License 0.21
120 TestFunctional/parallel/ServiceCmd/DeployApp 9.25
121 TestFunctional/parallel/Version/short 0.07
122 TestFunctional/parallel/Version/components 0.5
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.45
128 TestFunctional/parallel/ImageCommands/Setup 1.13
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.52
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
134 TestFunctional/parallel/ProfileCmd/profile_list 0.44
135 TestFunctional/parallel/MountCmd/any-port 15.2
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 8.38
138 TestFunctional/parallel/ServiceCmd/List 0.53
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
141 TestFunctional/parallel/ServiceCmd/Format 0.41
142 TestFunctional/parallel/ServiceCmd/URL 0.4
144 TestFunctional/parallel/MountCmd/specific-port 2.55
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.24
147 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
148 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.31
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.96
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.47
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.1
155 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
156 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
160 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 69.83
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.32
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.65
174 TestJSONOutput/start/Command 68.76
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.66
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.59
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.74
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.23
199 TestKicCustomNetwork/create_custom_network 34.05
200 TestKicCustomNetwork/use_default_bridge_network 27.25
201 TestKicExistingNetwork 26.08
202 TestKicCustomSubnet 24.91
203 TestKicStaticIP 26.64
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 52.86
208 TestMountStart/serial/StartWithMountFirst 5.24
209 TestMountStart/serial/VerifyMountFirst 0.26
210 TestMountStart/serial/StartWithMountSecond 5.49
211 TestMountStart/serial/VerifyMountSecond 0.26
212 TestMountStart/serial/DeleteFirst 1.6
213 TestMountStart/serial/VerifyMountPostDelete 0.27
214 TestMountStart/serial/Stop 1.18
215 TestMountStart/serial/RestartStopped 6.95
216 TestMountStart/serial/VerifyMountPostStop 0.25
219 TestMultiNode/serial/FreshStart2Nodes 86.5
220 TestMultiNode/serial/DeployApp2Nodes 4.8
221 TestMultiNode/serial/PingHostFrom2Pods 0.79
222 TestMultiNode/serial/AddNode 16.61
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.28
225 TestMultiNode/serial/CopyFile 9.39
226 TestMultiNode/serial/StopNode 2.13
227 TestMultiNode/serial/StartAfterStop 11.56
228 TestMultiNode/serial/RestartKeepsNodes 111.81
229 TestMultiNode/serial/DeleteNode 4.68
230 TestMultiNode/serial/StopMultiNode 23.69
231 TestMultiNode/serial/RestartMultiNode 73.61
232 TestMultiNode/serial/ValidateNameConflict 23.1
237 TestPreload 130.39
239 TestScheduledStopUnix 97.14
242 TestInsufficientStorage 12.98
243 TestRunningBinaryUpgrade 88.26
245 TestKubernetesUpgrade 371.24
246 TestMissingContainerUpgrade 91.42
247 TestStoppedBinaryUpgrade/Setup 0.54
248 TestStoppedBinaryUpgrade/Upgrade 87.79
257 TestPause/serial/Start 69.63
258 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
261 TestNoKubernetes/serial/StartWithK8s 31.49
269 TestNetworkPlugins/group/false 3.68
273 TestNoKubernetes/serial/StartWithStopK8s 16.84
274 TestNoKubernetes/serial/Start 7.22
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
276 TestNoKubernetes/serial/ProfileList 1.22
277 TestNoKubernetes/serial/Stop 1.19
278 TestNoKubernetes/serial/StartNoArgs 6.14
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
280 TestPause/serial/SecondStartNoReconfiguration 41.55
281 TestPause/serial/Pause 0.91
282 TestPause/serial/VerifyStatus 0.31
283 TestPause/serial/Unpause 0.74
284 TestPause/serial/PauseAgain 0.9
285 TestPause/serial/DeletePaused 2.72
286 TestPause/serial/VerifyDeletedResources 17.96
288 TestStartStop/group/old-k8s-version/serial/FirstStart 118.91
290 TestStartStop/group/no-preload/serial/FirstStart 51.01
291 TestStartStop/group/no-preload/serial/DeployApp 7.33
292 TestStartStop/group/old-k8s-version/serial/DeployApp 7.59
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
294 TestStartStop/group/no-preload/serial/Stop 11.87
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.9
296 TestStartStop/group/old-k8s-version/serial/Stop 11.88
297 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
298 TestStartStop/group/no-preload/serial/SecondStart 339.96
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
300 TestStartStop/group/old-k8s-version/serial/SecondStart 429.65
302 TestStartStop/group/embed-certs/serial/FirstStart 75.15
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.84
305 TestStartStop/group/embed-certs/serial/DeployApp 8.26
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
307 TestStartStop/group/embed-certs/serial/Stop 11.84
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.26
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/embed-certs/serial/SecondStart 339.84
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.86
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 341.45
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
318 TestStartStop/group/no-preload/serial/Pause 2.98
320 TestStartStop/group/newest-cni/serial/FirstStart 35.59
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
323 TestStartStop/group/newest-cni/serial/Stop 11.84
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/newest-cni/serial/SecondStart 26.52
326 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
330 TestStartStop/group/newest-cni/serial/Pause 2.77
331 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
332 TestNetworkPlugins/group/auto/Start 74.12
333 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
334 TestStartStop/group/old-k8s-version/serial/Pause 3.75
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
336 TestNetworkPlugins/group/kindnet/Start 68.91
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
339 TestStartStop/group/embed-certs/serial/Pause 3.54
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.05
341 TestNetworkPlugins/group/calico/Start 64.44
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.18
345 TestNetworkPlugins/group/custom-flannel/Start 55.27
346 TestNetworkPlugins/group/auto/KubeletFlags 0.33
347 TestNetworkPlugins/group/auto/NetCatPod 10.2
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestNetworkPlugins/group/auto/DNS 0.13
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
351 TestNetworkPlugins/group/auto/Localhost 0.15
352 TestNetworkPlugins/group/auto/HairPin 0.14
353 TestNetworkPlugins/group/kindnet/NetCatPod 9.28
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/kindnet/DNS 0.15
356 TestNetworkPlugins/group/kindnet/Localhost 0.12
357 TestNetworkPlugins/group/kindnet/HairPin 0.13
358 TestNetworkPlugins/group/calico/KubeletFlags 0.3
359 TestNetworkPlugins/group/calico/NetCatPod 10.2
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
362 TestNetworkPlugins/group/enable-default-cni/Start 83.05
363 TestNetworkPlugins/group/calico/DNS 0.16
364 TestNetworkPlugins/group/calico/Localhost 0.13
365 TestNetworkPlugins/group/calico/HairPin 0.13
366 TestNetworkPlugins/group/custom-flannel/DNS 0.13
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
369 TestNetworkPlugins/group/flannel/Start 60.55
370 TestNetworkPlugins/group/bridge/Start 76.24
371 TestNetworkPlugins/group/flannel/ControllerPod 6.01
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
373 TestNetworkPlugins/group/flannel/NetCatPod 9.17
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.17
376 TestNetworkPlugins/group/flannel/DNS 0.16
377 TestNetworkPlugins/group/flannel/Localhost 0.12
378 TestNetworkPlugins/group/flannel/HairPin 0.12
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
383 TestNetworkPlugins/group/bridge/NetCatPod 25.23
384 TestNetworkPlugins/group/bridge/DNS 0.12
385 TestNetworkPlugins/group/bridge/Localhost 0.1
386 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.16.0/json-events (8.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-844459 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-844459 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.740819246s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.74s)
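The json-events subtest above exercises `minikube start -o=json`, which emits progress as one JSON object per stdout line. As a rough sketch of consuming that stream in Go: the CloudEvents-style `type`/`data` field names below are recalled from minikube's JSON output schema rather than taken from this report, so treat them as assumptions.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Reads `minikube start -o=json ...` output from stdin and prints each event.
// Usage: minikube start -o=json --download-only ... | go run jsonevents.go
func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON noise
		}
		// "type" and "data" are assumed CloudEvents-style fields.
		fmt.Printf("%v: %v\n", ev["type"], ev["data"])
	}
}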

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
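preload-exists simply asserts that the preload tarball fetched by json-events is now on disk. A minimal sketch of such a check, assuming the cache layout shown in the LogsDuration output below (the helper name is made up for illustration):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the tarball path visible in this report's logs:
// $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-<k8s>-cri-o-overlay-amd64.tar.lz4
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.16.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Fprintln(os.Stderr, "preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("preload present:", p)
}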

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-844459
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-844459: exit status 85 (76.801658ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-844459 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |          |
	|         | -p download-only-844459        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:01:06
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:01:06.869256   73466 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:01:06.869503   73466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:06.869513   73466 out.go:304] Setting ErrFile to fd 2...
	I0213 23:01:06.869517   73466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:06.869709   73466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	W0213 23:01:06.869844   73466 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18169-66678/.minikube/config/config.json: open /home/jenkins/minikube-integration/18169-66678/.minikube/config/config.json: no such file or directory
	I0213 23:01:06.870464   73466 out.go:298] Setting JSON to true
	I0213 23:01:06.871322   73466 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6214,"bootTime":1707859053,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:01:06.871386   73466 start.go:138] virtualization: kvm guest
	I0213 23:01:06.874026   73466 out.go:97] [download-only-844459] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:01:06.875766   73466 out.go:169] MINIKUBE_LOCATION=18169
	W0213 23:01:06.874193   73466 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball: no such file or directory
	I0213 23:01:06.874236   73466 notify.go:220] Checking for updates...
	I0213 23:01:06.878676   73466 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:01:06.880143   73466 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:01:06.881551   73466 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	I0213 23:01:06.882954   73466 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0213 23:01:06.885746   73466 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 23:01:06.886266   73466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:01:06.907591   73466 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0213 23:01:06.907696   73466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:01:07.286810   73466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-13 23:01:07.277748276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:01:07.286914   73466 docker.go:295] overlay module found
	I0213 23:01:07.288656   73466 out.go:97] Using the docker driver based on user configuration
	I0213 23:01:07.288686   73466 start.go:298] selected driver: docker
	I0213 23:01:07.288696   73466 start.go:902] validating driver "docker" against <nil>
	I0213 23:01:07.288852   73466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:01:07.338276   73466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-13 23:01:07.329599439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:01:07.338431   73466 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 23:01:07.338955   73466 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0213 23:01:07.339104   73466 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 23:01:07.341147   73466 out.go:169] Using Docker driver with root privileges
	I0213 23:01:07.342715   73466 cni.go:84] Creating CNI manager for ""
	I0213 23:01:07.342745   73466 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:01:07.342758   73466 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0213 23:01:07.342787   73466 start_flags.go:321] config:
	{Name:download-only-844459 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-844459 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:01:07.344439   73466 out.go:97] Starting control plane node download-only-844459 in cluster download-only-844459
	I0213 23:01:07.344464   73466 cache.go:121] Beginning downloading kic base image for docker with crio
	I0213 23:01:07.345875   73466 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 23:01:07.345897   73466 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 23:01:07.346042   73466 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 23:01:07.361310   73466 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 23:01:07.361574   73466 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 23:01:07.361656   73466 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 23:01:07.377738   73466 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0213 23:01:07.377765   73466 cache.go:56] Caching tarball of preloaded images
	I0213 23:01:07.377899   73466 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 23:01:07.380073   73466 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0213 23:01:07.380091   73466 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:01:07.415345   73466 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0213 23:01:10.229810   73466 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0213 23:01:11.316983   73466 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:01:11.317079   73466 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-844459"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
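In the log above, the preload is downloaded with an inline `?checksum=md5:<hex>` parameter, and the "getting/saving/verifying checksum" lines show the tarball being validated afterwards. The verification idea reduces to hashing the file and comparing digests; a hedged Go sketch of that step (minikube's actual implementation lives in its download/preload packages):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes a downloaded file and compares it to the expected hex
// digest, the same check implied by the "?checksum=md5:..." URL above.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

// Usage: go run verifymd5.go <file> <expected-md5-hex>
func main() {
	if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}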

TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.21s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-844459
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.28.4/json-events (6.34s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-658548 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-658548 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.334686832s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.34s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-658548
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-658548: exit status 85 (77.456796ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-844459 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | -p download-only-844459        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| delete  | -p download-only-844459        | download-only-844459 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| start   | -o=json --download-only        | download-only-658548 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | -p download-only-658548        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:01:16
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:01:16.032915   73770 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:01:16.033022   73770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:16.033029   73770 out.go:304] Setting ErrFile to fd 2...
	I0213 23:01:16.033034   73770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:16.033236   73770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:01:16.033811   73770 out.go:298] Setting JSON to true
	I0213 23:01:16.034634   73770 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6223,"bootTime":1707859053,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:01:16.034699   73770 start.go:138] virtualization: kvm guest
	I0213 23:01:16.037264   73770 out.go:97] [download-only-658548] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:01:16.038919   73770 out.go:169] MINIKUBE_LOCATION=18169
	I0213 23:01:16.037452   73770 notify.go:220] Checking for updates...
	I0213 23:01:16.042515   73770 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:01:16.044247   73770 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:01:16.045827   73770 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	I0213 23:01:16.047300   73770 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0213 23:01:16.050192   73770 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 23:01:16.050466   73770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:01:16.074147   73770 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0213 23:01:16.074224   73770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:01:16.125566   73770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-13 23:01:16.116200021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:01:16.125658   73770 docker.go:295] overlay module found
	I0213 23:01:16.127544   73770 out.go:97] Using the docker driver based on user configuration
	I0213 23:01:16.127575   73770 start.go:298] selected driver: docker
	I0213 23:01:16.127581   73770 start.go:902] validating driver "docker" against <nil>
	I0213 23:01:16.127663   73770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:01:16.177207   73770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-02-13 23:01:16.168676352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:01:16.177367   73770 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 23:01:16.177814   73770 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0213 23:01:16.177950   73770 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 23:01:16.179896   73770 out.go:169] Using Docker driver with root privileges
	I0213 23:01:16.181178   73770 cni.go:84] Creating CNI manager for ""
	I0213 23:01:16.181205   73770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:01:16.181214   73770 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0213 23:01:16.181223   73770 start_flags.go:321] config:
	{Name:download-only-658548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-658548 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:01:16.182808   73770 out.go:97] Starting control plane node download-only-658548 in cluster download-only-658548
	I0213 23:01:16.182833   73770 cache.go:121] Beginning downloading kic base image for docker with crio
	I0213 23:01:16.184192   73770 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 23:01:16.184212   73770 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:01:16.184375   73770 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 23:01:16.199363   73770 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 23:01:16.199509   73770 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 23:01:16.199528   73770 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0213 23:01:16.199533   73770 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0213 23:01:16.199544   73770 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0213 23:01:16.209352   73770 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:01:16.209379   73770 cache.go:56] Caching tarball of preloaded images
	I0213 23:01:16.209526   73770 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:01:16.211501   73770 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0213 23:01:16.211524   73770 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:01:16.241964   73770 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:01:20.681161   73770 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:01:20.681281   73770 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-658548"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
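The image.go lines above trace a three-step lookup for the kic base image: check the local docker daemon, then the on-disk cache directory, and only pull if both miss (here the cached copy is found and the pull is skipped). A sketch of that decision order, with hypothetical function names; the real logic is in minikube's image handling packages.

package main

import "fmt"

// ensureBaseImage mirrors the order of checks logged above: local docker
// daemon first, then the on-disk cache, and a pull only as a last resort.
func ensureBaseImage(inDaemon, inCacheDir func(ref string) bool, pull func(ref string) error, ref string) error {
	if inDaemon(ref) {
		fmt.Println("found in local docker daemon, nothing to do")
		return nil
	}
	if inCacheDir(ref) {
		fmt.Println("exists in cache, skipping pull")
		return nil
	}
	fmt.Println("downloading to local cache")
	return pull(ref)
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866"
	_ = ensureBaseImage(
		func(string) bool { return false }, // pretend not loaded in the daemon
		func(string) bool { return true },  // pretend already cached on disk
		func(string) error { return nil },
		ref,
	)
}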

TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-658548
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (8.1s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-940739 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-940739 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.102458642s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (8.10s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-940739
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-940739: exit status 85 (78.374353ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-844459 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | -p download-only-844459           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| delete  | -p download-only-844459           | download-only-844459 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| start   | -o=json --download-only           | download-only-658548 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | -p download-only-658548           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| delete  | -p download-only-658548           | download-only-658548 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	| start   | -o=json --download-only           | download-only-940739 | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | -p download-only-940739           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:01:22
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:01:22.789312   74058 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:01:22.789460   74058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:22.789473   74058 out.go:304] Setting ErrFile to fd 2...
	I0213 23:01:22.789480   74058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:22.789728   74058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:01:22.790345   74058 out.go:298] Setting JSON to true
	I0213 23:01:22.791229   74058 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6230,"bootTime":1707859053,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:01:22.791291   74058 start.go:138] virtualization: kvm guest
	I0213 23:01:22.793682   74058 out.go:97] [download-only-940739] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:01:22.795251   74058 out.go:169] MINIKUBE_LOCATION=18169
	I0213 23:01:22.793884   74058 notify.go:220] Checking for updates...
	I0213 23:01:22.798277   74058 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:01:22.799629   74058 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:01:22.801017   74058 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	I0213 23:01:22.802437   74058 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0213 23:01:22.804968   74058 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 23:01:22.805202   74058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:01:22.825681   74058 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0213 23:01:22.825850   74058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:01:22.876468   74058 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-13 23:01:22.86659585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:01:22.876570   74058 docker.go:295] overlay module found
	I0213 23:01:22.878474   74058 out.go:97] Using the docker driver based on user configuration
	I0213 23:01:22.878495   74058 start.go:298] selected driver: docker
	I0213 23:01:22.878500   74058 start.go:902] validating driver "docker" against <nil>
	I0213 23:01:22.878582   74058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:01:22.927594   74058 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:50 SystemTime:2024-02-13 23:01:22.918963832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:01:22.927765   74058 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 23:01:22.928245   74058 start_flags.go:392] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0213 23:01:22.928417   74058 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 23:01:22.930387   74058 out.go:169] Using Docker driver with root privileges
	I0213 23:01:22.931821   74058 cni.go:84] Creating CNI manager for ""
	I0213 23:01:22.931840   74058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0213 23:01:22.931851   74058 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0213 23:01:22.931860   74058 start_flags.go:321] config:
	{Name:download-only-940739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-940739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:01:22.933456   74058 out.go:97] Starting control plane node download-only-940739 in cluster download-only-940739
	I0213 23:01:22.933474   74058 cache.go:121] Beginning downloading kic base image for docker with crio
	I0213 23:01:22.934834   74058 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0213 23:01:22.934856   74058 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:01:22.934958   74058 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0213 23:01:22.949933   74058 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0213 23:01:22.950034   74058 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0213 23:01:22.950057   74058 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0213 23:01:22.950067   74058 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0213 23:01:22.950080   74058 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0213 23:01:22.992777   74058 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0213 23:01:22.992807   74058 cache.go:56] Caching tarball of preloaded images
	I0213 23:01:22.992949   74058 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:01:22.994809   74058 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0213 23:01:22.994823   74058 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:01:23.030273   74058 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0213 23:01:26.256518   74058 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:01:26.256617   74058 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18169-66678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0213 23:01:27.056830   74058 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0213 23:01:27.057173   74058 profile.go:148] Saving config to /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/download-only-940739/config.json ...
	I0213 23:01:27.057210   74058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/download-only-940739/config.json: {Name:mkfe4e950cb94d3b35f851e4703e82ca78f0ef1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:01:27.057406   74058 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:01:27.057540   74058 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18169-66678/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-940739"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
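The log above fetches the CRI-O preload tarball and verifies it against an md5 digest embedded in the download URL. A minimal sketch of the same check done by hand, with the URL and checksum copied verbatim from the log (the curl/md5sum usage is illustrative, not part of the test):

# Fetch the preload tarball and compare against the md5 from the ?checksum= parameter above.
curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4"
md5sum preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
# expected digest (from the log): 9e0f57288adacc30aad3ff7e72a8dc68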

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-940739
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.29s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-574132 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-574132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-574132
--- PASS: TestDownloadOnlyKic (1.29s)
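For context, --download-only boots no cluster; it only populates the kic base image and preload caches. A sketch of using that to pre-warm a CI runner, with a hypothetical profile name and the same flags as the invocation above:

# Populate ~/.minikube caches without starting a cluster (profile name is illustrative).
out/minikube-linux-amd64 start --download-only -p warm-cache --driver=docker --container-runtime=crio
out/minikube-linux-amd64 delete -p warm-cache
# A later real start on this machine can then reuse the cached base image and preload.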

TestBinaryMirror (0.74s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-182974 --alsologtostderr --binary-mirror http://127.0.0.1:45437 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-182974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-182974
--- PASS: TestBinaryMirror (0.74s)
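The --binary-mirror flag redirects the kubectl/kubeadm/kubelet binary downloads away from dl.k8s.io. A rough sketch of standing up such a mirror locally; the directory layout mirroring dl.k8s.io (release/<version>/bin/linux/amd64/...) is an assumption, and the path is a placeholder:

# Serve a pre-seeded mirror directory on the port used by the test above.
python3 -m http.server 45437 --directory /path/to/mirror &   # placeholder path
out/minikube-linux-amd64 start --download-only -p mirror-check \
  --binary-mirror http://127.0.0.1:45437 --driver=docker --container-runtime=crio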

TestOffline (88.52s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-436704 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-436704 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m24.26020618s)
helpers_test.go:175: Cleaning up "offline-crio-436704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-436704
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-436704: (4.264200842s)
--- PASS: TestOffline (88.52s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-913502
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-913502: exit status 85 (65.66189ms)

-- stdout --
	* Profile "addons-913502" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-913502"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-913502
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-913502: exit status 85 (66.936367ms)

-- stdout --
	* Profile "addons-913502" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-913502"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (155.25s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-913502 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-913502 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m35.252009049s)
--- PASS: TestAddons/Setup (155.25s)
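The same addons can also be toggled one at a time on the running profile, rather than all at once at start; a short sketch using metrics-server as the example:

out/minikube-linux-amd64 -p addons-913502 addons list                     # show current addon state
out/minikube-linux-amd64 -p addons-913502 addons enable metrics-server   # enable a single addon
out/minikube-linux-amd64 -p addons-913502 addons disable metrics-server  # and disable it again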

TestAddons/parallel/Registry (15.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 16.648972ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zd97h" [4c64ca96-e524-479f-b3b1-8e37e19bf37e] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.039834366s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6fcvz" [fd7e7f0f-51ba-46ce-8a59-f33819b6a633] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004629789s
addons_test.go:340: (dbg) Run:  kubectl --context addons-913502 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-913502 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-913502 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.375695982s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 ip
2024/02/13 23:04:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-913502 addons disable registry --alsologtostderr -v=1: (1.239535851s)
--- PASS: TestAddons/parallel/Registry (15.90s)
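To reproduce the probe by hand: the test checks the registry from inside the cluster via its service DNS name, and the DEBUG line above shows the companion check against the node IP. A sketch (the pod name registry-probe is illustrative; /v2/ is the standard Docker registry API root):

# In-cluster check, mirroring the wget probe in the log above.
kubectl --context addons-913502 run --rm registry-probe --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# Host-side check against the node IP and port from the DEBUG line.
curl -s http://192.168.49.2:5000/v2/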

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2vps6" [5ba29713-9b9a-4e3b-ac2b-f213297fe0cb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004324128s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-913502
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-913502: (5.797300626s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/MetricsServer (7.53s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.295737ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-jv886" [0721b3c3-1074-430f-8fcf-1a0a987218e0] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.073017617s
addons_test.go:415: (dbg) Run:  kubectl --context addons-913502 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-913502 addons disable metrics-server --alsologtostderr -v=1: (1.336878s)
--- PASS: TestAddons/parallel/MetricsServer (7.53s)
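Once metrics-server reports healthy, resource metrics should resolve; a quick manual check alongside the `top pods` call the test makes:

kubectl --context addons-913502 top nodes                # node-level CPU/memory usage
kubectl --context addons-913502 top pods -n kube-system  # per-pod usage, as in the test above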

TestAddons/parallel/HelmTiller (9.98s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 16.604958ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-hd46l" [fe48a6c2-5ee5-4f2e-afc7-bdf16742cbfe] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.039315955s
addons_test.go:473: (dbg) Run:  kubectl --context addons-913502 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-913502 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.393562435s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.98s)

TestAddons/parallel/CSI (55.12s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 17.679519ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-913502 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-913502 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [51e169ba-9c2a-45e2-8964-eb55fb86c3b0] Pending
helpers_test.go:344: "task-pv-pod" [51e169ba-9c2a-45e2-8964-eb55fb86c3b0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [51e169ba-9c2a-45e2-8964-eb55fb86c3b0] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003710785s
addons_test.go:584: (dbg) Run:  kubectl --context addons-913502 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-913502 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-913502 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-913502 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-913502 delete pod task-pv-pod: (1.013035356s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-913502 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-913502 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-913502 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e7fa816e-f574-46e5-950d-da53aaf414e1] Pending
helpers_test.go:344: "task-pv-pod-restore" [e7fa816e-f574-46e5-950d-da53aaf414e1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e7fa816e-f574-46e5-950d-da53aaf414e1] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003919891s
addons_test.go:626: (dbg) Run:  kubectl --context addons-913502 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-913502 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-913502 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-913502 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.53572927s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.12s)
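The objects this test round-trips live in testdata/csi-hostpath-driver/; below is a condensed sketch of an equivalent PVC-plus-snapshot pair. The storage class and snapshot class names are assumptions for the csi-hostpath driver, not copied from the testdata files:

kubectl --context addons-913502 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  storageClassName: csi-hostpath-sc          # assumed class name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF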

TestAddons/parallel/Headlamp (12.4s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-913502 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-913502 --alsologtostderr -v=1: (1.386243951s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-gfh4d" [87bf28fc-f8b9-41fe-8556-343fc76aa56b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-gfh4d" [87bf28fc-f8b9-41fe-8556-343fc76aa56b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.014642439s
--- PASS: TestAddons/parallel/Headlamp (12.40s)

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-srg7k" [ea252ad6-9fe6-4fc1-9247-0c58d3d0a52b] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004151754s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-913502
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/LocalPath (10.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-913502 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-913502 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-913502 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8096253e-4bb5-40b1-8a71-daba0fcdcd0e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8096253e-4bb5-40b1-8a71-daba0fcdcd0e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8096253e-4bb5-40b1-8a71-daba0fcdcd0e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003442247s
addons_test.go:891: (dbg) Run:  kubectl --context addons-913502 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 ssh "cat /opt/local-path-provisioner/pvc-d0c9bd29-9bf8-4b15-8147-542eee087336_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-913502 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-913502 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-913502 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.13s)

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nwpfp" [61a6a604-82d9-4f32-9be6-9b58ec3b2930] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004458223s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-913502
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-rg866" [9a7e60cc-7aeb-4319-9a1b-f5669d24b9ac] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003633761s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-913502 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-913502 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (12.1s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-913502
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-913502: (11.816994312s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-913502
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-913502
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-913502
--- PASS: TestAddons/StoppedEnableDisable (12.10s)

TestCertOptions (27.65s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-591297 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-591297 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.065765096s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-591297 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-591297 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-591297 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-591297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-591297
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-591297: (1.926487933s)
--- PASS: TestCertOptions (27.65s)
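The assertion here boils down to the requested SANs and port appearing in the apiserver certificate. A sketch of checking that by hand, reusing the openssl invocation from the test and a standard grep (the grep step is illustrative):

out/minikube-linux-amd64 -p cert-options-591297 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# expect the extra --apiserver-ips and --apiserver-names values to be listed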

TestCertExpiration (241.74s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-403224 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-403224 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.6446354s)
E0213 23:34:09.007985   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:34:12.289551   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-403224 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-403224 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (33.066969118s)
helpers_test.go:175: Cleaning up "cert-expiration-403224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-403224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-403224: (2.023120308s)
--- PASS: TestCertExpiration (241.74s)
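The two starts above request a 3m and then an 8760h (one-year) certificate lifetime. A sketch of inspecting the live apiserver certificate's expiry between the two starts (standard openssl flags; not part of the test itself):

out/minikube-linux-amd64 -p cert-expiration-403224 ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"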

TestForceSystemdFlag (30.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-521681 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-521681 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.304701177s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-521681 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-521681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-521681
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-521681: (2.528409677s)
--- PASS: TestForceSystemdFlag (30.15s)
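The `cat` of 02-crio.conf above is checking which cgroup manager CRI-O was configured with. A narrower sketch of the same check, assuming the standard crio.conf key name:

out/minikube-linux-amd64 -p force-systemd-flag-521681 ssh \
  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
# expected with --force-systemd: cgroup_manager = "systemd"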

TestForceSystemdEnv (28.85s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-085906 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-085906 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.421482634s)
helpers_test.go:175: Cleaning up "force-systemd-env-085906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-085906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-085906: (2.424656811s)
--- PASS: TestForceSystemdEnv (28.85s)

TestKVMDriverInstallOrUpdate (3.33s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.33s)

TestErrorSpam/setup (23.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-187186 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-187186 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-187186 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-187186 --driver=docker  --container-runtime=crio: (23.727053941s)
--- PASS: TestErrorSpam/setup (23.73s)

TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

TestErrorSpam/stop (1.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 stop: (1.181015929s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-187186 --log_dir /tmp/nospam-187186 stop
--- PASS: TestErrorSpam/stop (1.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18169-66678/.minikube/files/etc/test/nested/copy/73453/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-879196 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0213 23:09:09.007691   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:09.013752   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:09.024050   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:09.044283   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:09.084619   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:09.164918   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:09.325299   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:09.645922   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:10.286849   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:11.567412   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:14.129185   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-879196 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.79504944s)
--- PASS: TestFunctional/serial/StartWithProxy (68.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.37s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-879196 --alsologtostderr -v=8
E0213 23:09:19.249419   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:29.490538   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:09:49.971225   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-879196 --alsologtostderr -v=8: (36.364010546s)
functional_test.go:659: soft start took 36.366293188s for "functional-879196" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.37s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-879196 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 cache add registry.k8s.io/pause:3.1: (1.034563772s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-879196 /tmp/TestFunctionalserialCacheCmdcacheadd_local734422159/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cache add minikube-local-cache-test:functional-879196
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cache delete minikube-local-cache-test:functional-879196
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-879196
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.195761ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
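The round-trip exercised here: remove the image from the node's CRI store, confirm it is gone, then repopulate it from minikube's on-disk cache. A condensed sketch using the same commands as the log above:

out/minikube-linux-amd64 -p functional-879196 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-879196 cache reload   # re-push cached images into the node
out/minikube-linux-amd64 -p functional-879196 ssh sudo crictl inspecti registry.k8s.io/pause:latest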

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 kubectl -- --context functional-879196 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-879196 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (31.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-879196 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0213 23:10:30.932014   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-879196 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.667014122s)
functional_test.go:757: restart took 31.667191611s for "functional-879196" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.67s)
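For reference, --extra-config takes component.flag=value pairs that are passed straight through to the named control-plane component on restart; the invocation below mirrors the test's:

out/minikube-linux-amd64 start -p functional-879196 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all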

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-879196 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.37s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 logs: (1.368230551s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 logs --file /tmp/TestFunctionalserialLogsFileCmd573493387/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 logs --file /tmp/TestFunctionalserialLogsFileCmd573493387/001/logs.txt: (1.403522323s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/serial/InvalidService (4.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-879196 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-879196
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-879196: exit status 115 (339.061401ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30600 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-879196 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-879196 delete -f testdata/invalidsvc.yaml: (1.109919767s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)
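Note: exit status 115 is the code minikube surfaces here as SVC_UNREACHABLE: the Service object and its NodePort URL exist, but no running pod backs the selector. A minimal sketch, assuming the binary path and profile name from this run (this helper is not part of the test suite), of asserting that failure mode from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `minikube service` resolves the NodePort URL first, then fails because
	// no running pod backs invalid-svc; per the log it exits with status 115.
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-879196")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Printf("got expected SVC_UNREACHABLE (exit 115):\n%s", out)
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}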

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 config get cpus: exit status 14 (86.50157ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 config get cpus: exit status 14 (82.929321ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
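Note: the sequence above demonstrates minikube's config exit-code convention: `config get` on an unset key exits 14 with "specified key could not be found in config", while a set key prints its value and exits 0. A minimal sketch of the same set/get/unset round-trip, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

// runConfig invokes a `minikube config` subcommand against the test profile
// and returns combined output plus exit code (-1 if the process never ran).
func runConfig(args ...string) (string, int) {
	all := append([]string{"-p", "functional-879196", "config"}, args...)
	cmd := exec.Command("out/minikube-linux-amd64", all...)
	out, _ := cmd.CombinedOutput()
	if cmd.ProcessState == nil {
		return string(out), -1
	}
	return string(out), cmd.ProcessState.ExitCode()
}

func main() {
	runConfig("unset", "cpus") // start from a clean slate
	if _, code := runConfig("get", "cpus"); code == 14 {
		fmt.Println("unset key exits 14, matching the log")
	}
	runConfig("set", "cpus", "2")
	out, code := runConfig("get", "cpus")
	fmt.Printf("after set: %q (exit %d)\n", out, code) // expect "2\n", exit 0
	runConfig("unset", "cpus") // leave the profile as the test found it
}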

TestFunctional/parallel/DashboardCmd (13.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-879196 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-879196 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 111814: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.37s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-879196 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-879196 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.983474ms)
-- stdout --
	* [functional-879196] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0213 23:11:03.439597  110482 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:11:03.439727  110482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:11:03.439740  110482 out.go:304] Setting ErrFile to fd 2...
	I0213 23:11:03.439747  110482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:11:03.439932  110482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:11:03.440533  110482 out.go:298] Setting JSON to false
	I0213 23:11:03.441522  110482 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6811,"bootTime":1707859053,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:11:03.441585  110482 start.go:138] virtualization: kvm guest
	I0213 23:11:03.445320  110482 out.go:177] * [functional-879196] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:11:03.447033  110482 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 23:11:03.447110  110482 notify.go:220] Checking for updates...
	I0213 23:11:03.448753  110482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:11:03.450482  110482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:11:03.451941  110482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	I0213 23:11:03.453623  110482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:11:03.455195  110482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:11:03.457169  110482 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:11:03.457695  110482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:11:03.489592  110482 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0213 23:11:03.489727  110482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:11:03.559025  110482 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:58 SystemTime:2024-02-13 23:11:03.545958455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:11:03.559142  110482 docker.go:295] overlay module found
	I0213 23:11:03.561454  110482 out.go:177] * Using the docker driver based on existing profile
	I0213 23:11:03.563088  110482 start.go:298] selected driver: docker
	I0213 23:11:03.563120  110482 start.go:902] validating driver "docker" against &{Name:functional-879196 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-879196 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:11:03.563283  110482 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:11:03.566212  110482 out.go:177] 
	W0213 23:11:03.568031  110482 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0213 23:11:03.569837  110482 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-879196 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-879196 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-879196 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (186.317847ms)
-- stdout --
	* [functional-879196] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0213 23:11:03.952696  110840 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:11:03.952984  110840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:11:03.952994  110840 out.go:304] Setting ErrFile to fd 2...
	I0213 23:11:03.952999  110840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:11:03.953319  110840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:11:03.953893  110840 out.go:298] Setting JSON to false
	I0213 23:11:03.954889  110840 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6811,"bootTime":1707859053,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:11:03.954956  110840 start.go:138] virtualization: kvm guest
	I0213 23:11:03.957679  110840 out.go:177] * [functional-879196] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0213 23:11:03.959445  110840 notify.go:220] Checking for updates...
	I0213 23:11:03.959483  110840 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 23:11:03.961236  110840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:11:03.963093  110840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:11:03.964677  110840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	I0213 23:11:03.966323  110840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:11:03.967960  110840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:11:03.969818  110840 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:11:03.970277  110840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:11:03.995253  110840 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0213 23:11:03.995381  110840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:11:04.052493  110840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:58 SystemTime:2024-02-13 23:11:04.042465309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:11:04.052595  110840 docker.go:295] overlay module found
	I0213 23:11:04.055902  110840 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0213 23:11:04.057639  110840 start.go:298] selected driver: docker
	I0213 23:11:04.057664  110840 start.go:902] validating driver "docker" against &{Name:functional-879196 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-879196 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:11:04.057791  110840 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:11:04.060140  110840 out.go:177] 
	W0213 23:11:04.061818  110840 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0213 23:11:04.062982  110840 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)
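Note: the `-f` argument above is a Go text/template rendered against minikube's status structure, which is how `host:{{.Host}},kublet:{{.Kubelet}},...` (the typo in the test's format string included) becomes a one-line summary. A self-contained sketch of the mechanism; the Status struct below is a stand-in for illustration, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in holding the fields the template references;
// minikube's real status type lives in the minikube codebase.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same template string the test passes to `status -f`, typo and all.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}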

TestFunctional/parallel/ServiceCmdConnect (7.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-879196 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-879196 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-h7lsj" [8df6cb14-60f5-4efa-b670-6fdbc57fca30] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-h7lsj" [8df6cb14-60f5-4efa-b670-6fdbc57fca30] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00320901s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31523
functional_test.go:1671: http://192.168.49.2:31523: success! body:

Hostname: hello-node-connect-55497b8b78-h7lsj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31523
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.69s)
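Note: the body above is echoserver reflecting the request back. Once `service hello-node-connect --url` prints the NodePort endpoint, the check is a plain HTTP GET; a minimal sketch using the URL from this particular run (it changes whenever the service is recreated):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NodePort URL reported by `minikube service hello-node-connect --url`
	// in this run; not stable across runs.
	const url = "http://192.168.49.2:31523"
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	// echoserver reflects the request: hostname, headers, path, and so on.
	fmt.Printf("status %s\n%s", resp.Status, body)
}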

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (32.84s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [91f027cb-18f8-42e0-8e74-7c7c4f4635d7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005309988s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-879196 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-879196 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-879196 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-879196 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8aa96506-5ffa-42fc-b2e0-29c4673aa771] Pending
helpers_test.go:344: "sp-pod" [8aa96506-5ffa-42fc-b2e0-29c4673aa771] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8aa96506-5ffa-42fc-b2e0-29c4673aa771] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003591662s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-879196 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-879196 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-879196 delete -f testdata/storage-provisioner/pod.yaml: (1.091139902s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-879196 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4c1b09dc-f739-448c-b6a4-96f55682297e] Pending
helpers_test.go:344: "sp-pod" [4c1b09dc-f739-448c-b6a4-96f55682297e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4c1b09dc-f739-448c-b6a4-96f55682297e] Running
2024/02/13 23:11:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003952226s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-879196 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.84s)
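Note: this test is a persistence round-trip: write a marker file through the first sp-pod, delete the pod, recreate it against the same PersistentVolumeClaim, and confirm the file survived. A condensed sketch of the same steps driven through kubectl (the real test also waits for the replacement pod to be Running before the final check):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the test context and fails loudly.
func kubectl(args ...string) string {
	all := append([]string{"--context", "functional-879196"}, args...)
	out, err := exec.Command("kubectl", all...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	// The PVC-backed volume is mounted at /tmp/mount inside sp-pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (Wait for the new pod to be Running here, as the real test does.)
	fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
}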

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (2.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh -n functional-879196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cp functional-879196:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3331281764/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh -n functional-879196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh -n functional-879196 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.27s)

TestFunctional/parallel/MySQL (19.63s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-879196 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-lw8vf" [93044752-87f6-4a36-906c-96dffe557486] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-lw8vf" [93044752-87f6-4a36-906c-96dffe557486] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.032874608s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-879196 exec mysql-859648c796-lw8vf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-879196 exec mysql-859648c796-lw8vf -- mysql -ppassword -e "show databases;": exit status 1 (419.018529ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-879196 exec mysql-859648c796-lw8vf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-879196 exec mysql-859648c796-lw8vf -- mysql -ppassword -e "show databases;": exit status 1 (331.792423ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-879196 exec mysql-859648c796-lw8vf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.63s)
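Note: the two ERROR 2002 exits are expected churn: the pod reports Running before mysqld has created its socket, so the query is retried until it succeeds. A sketch of that retry pattern; the attempt count and sleep interval here are illustrative, not the test's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry `show databases;` until mysqld inside the pod accepts connections.
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-879196",
			"exec", "mysql-859648c796-lw8vf", "--",
			"mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		// ERROR 2002 means the server socket is not up yet; back off and retry.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became ready")
}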

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/73453/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo cat /etc/test/nested/copy/73453/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/73453.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo cat /etc/ssl/certs/73453.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/73453.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo cat /usr/share/ca-certificates/73453.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/734532.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo cat /etc/ssl/certs/734532.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/734532.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo cat /usr/share/ca-certificates/734532.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-879196 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 ssh "sudo systemctl is-active docker": exit status 1 (303.17049ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 ssh "sudo systemctl is-active containerd": exit status 1 (277.532603ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
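Note: `systemctl is-active` prints the unit state and exits non-zero for anything other than "active" (3 in the log above), so the non-zero exit is the assertion succeeding, not an ssh failure: with crio as the container runtime, the docker and containerd units must both be inactive. A minimal sketch of the same probe, assuming the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-879196",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		// Expect state "inactive" and a non-zero exit relayed through ssh.
		fmt.Printf("%s: %s (err: %v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}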

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-879196 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-879196 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-wgfmd" [0bf90551-5005-4eb9-b3b6-26a403fe703d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-wgfmd" [0bf90551-5005-4eb9-b3b6-26a403fe703d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.04054682s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.25s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-879196 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-879196
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-879196 image ls --format short --alsologtostderr:
I0213 23:11:13.031653  112548 out.go:291] Setting OutFile to fd 1 ...
I0213 23:11:13.031816  112548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:13.031826  112548 out.go:304] Setting ErrFile to fd 2...
I0213 23:11:13.031831  112548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:13.032048  112548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
I0213 23:11:13.032821  112548 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:13.032981  112548 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:13.033500  112548 cli_runner.go:164] Run: docker container inspect functional-879196 --format={{.State.Status}}
I0213 23:11:13.050612  112548 ssh_runner.go:195] Run: systemctl --version
I0213 23:11:13.050678  112548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-879196
I0213 23:11:13.068127  112548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/functional-879196/id_rsa Username:docker}
I0213 23:11:13.164017  112548 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
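Note: the stderr trace shows where the listing comes from on a crio cluster: minikube sshes into the node, runs `sudo crictl images --output json`, and flattens each image's repoTags into the short list above. A sketch of parsing that JSON shape; the field names below are assumed from crictl's output format rather than taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages models a minimal slice of `crictl images --output json`;
// the json tags here are an assumption about crictl's schema.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	raw := []byte(`{"images":[{"id":"e6f18...","repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	var imgs crictlImages
	if err := json.Unmarshal(raw, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // one line per tag, as in `image ls --format short`
		}
	}
}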

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-879196 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 247f7abff9f70 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/library/nginx                 | alpine             | 2b70e4aaac6b5 | 44.4MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-879196  | ffd4cfbbe753e | 34.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-879196 image ls --format table --alsologtostderr:
I0213 23:11:14.347275  112864 out.go:291] Setting OutFile to fd 1 ...
I0213 23:11:14.347425  112864 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:14.347442  112864 out.go:304] Setting ErrFile to fd 2...
I0213 23:11:14.347450  112864 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:14.347686  112864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
I0213 23:11:14.348367  112864 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:14.348491  112864 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:14.348933  112864 cli_runner.go:164] Run: docker container inspect functional-879196 --format={{.State.Status}}
I0213 23:11:14.366476  112864 ssh_runner.go:195] Run: systemctl --version
I0213 23:11:14.366540  112864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-879196
I0213 23:11:14.382875  112864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/functional-879196/id_rsa Username:docker}
I0213 23:11:14.473126  112864 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-879196 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05","repoDigests":["docker.io/library/nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938","docker.io/library/nginx@sha256:b41c95c4080d503eac2e455a47280079c5905c6281a1a5ee8fe75b52a92b35a0"],"repoTags":["docker.io/library/nginx:latest"],"size":"190871348"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-879196"],"size":"34114467"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748","repoDigests":["docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027","docker.io/library/nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44408171"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-879196 image ls --format json --alsologtostderr:
I0213 23:11:14.039488  112764 out.go:291] Setting OutFile to fd 1 ...
I0213 23:11:14.039657  112764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:14.039667  112764 out.go:304] Setting ErrFile to fd 2...
I0213 23:11:14.039675  112764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:14.039881  112764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
I0213 23:11:14.040542  112764 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:14.040680  112764 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:14.041108  112764 cli_runner.go:164] Run: docker container inspect functional-879196 --format={{.State.Status}}
I0213 23:11:14.057785  112764 ssh_runner.go:195] Run: systemctl --version
I0213 23:11:14.057854  112764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-879196
I0213 23:11:14.077066  112764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/functional-879196/id_rsa Username:docker}
I0213 23:11:14.208977  112764 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
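Note on the format: the Stdout above is a single JSON array of image records with id, repoDigests, repoTags, and size fields. A minimal Go sketch for decoding it, with the struct inferred from that output (the image-ls.json file name is hypothetical; crictl serializes the byte count as a string):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// imageRecord mirrors one element of the `image ls --format json` array above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, serialized as a string
}

func main() {
	raw, err := os.ReadFile("image-ls.json") // hypothetical dump of the stdout above
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.RepoTags)
	}
}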

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-879196 image ls --format yaml --alsologtostderr:
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-879196
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05
repoDigests:
- docker.io/library/nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938
- docker.io/library/nginx@sha256:b41c95c4080d503eac2e455a47280079c5905c6281a1a5ee8fe75b52a92b35a0
repoTags:
- docker.io/library/nginx:latest
size: "190871348"
- id: 2b70e4aaac6b5370bf3a556f5e13156692351696dd5d7c5530d117aa21772748
repoDigests:
- docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027
- docker.io/library/nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076
repoTags:
- docker.io/library/nginx:alpine
size: "44408171"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-879196 image ls --format yaml --alsologtostderr:
I0213 23:11:13.265492  112593 out.go:291] Setting OutFile to fd 1 ...
I0213 23:11:13.265789  112593 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:13.265802  112593 out.go:304] Setting ErrFile to fd 2...
I0213 23:11:13.265809  112593 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:13.266013  112593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
I0213 23:11:13.266664  112593 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:13.266782  112593 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:13.267227  112593 cli_runner.go:164] Run: docker container inspect functional-879196 --format={{.State.Status}}
I0213 23:11:13.285717  112593 ssh_runner.go:195] Run: systemctl --version
I0213 23:11:13.285774  112593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-879196
I0213 23:11:13.302587  112593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/functional-879196/id_rsa Username:docker}
I0213 23:11:13.412976  112593 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 ssh pgrep buildkitd: exit status 1 (269.754078ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image build -t localhost/my-image:functional-879196 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 image build -t localhost/my-image:functional-879196 testdata/build --alsologtostderr: (2.95378012s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-879196 image build -t localhost/my-image:functional-879196 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c5cc1aed93b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-879196
--> 830c20d6920
Successfully tagged localhost/my-image:functional-879196
830c20d6920c4b47c10aea2ea78752cfa76399768c3065e6cd9f071bb8df328e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-879196 image build -t localhost/my-image:functional-879196 testdata/build --alsologtostderr:
I0213 23:11:13.808641  112717 out.go:291] Setting OutFile to fd 1 ...
I0213 23:11:13.808980  112717 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:13.808992  112717 out.go:304] Setting ErrFile to fd 2...
I0213 23:11:13.809000  112717 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 23:11:13.809310  112717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
I0213 23:11:13.810170  112717 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:13.810789  112717 config.go:182] Loaded profile config "functional-879196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 23:11:13.811237  112717 cli_runner.go:164] Run: docker container inspect functional-879196 --format={{.State.Status}}
I0213 23:11:13.831163  112717 ssh_runner.go:195] Run: systemctl --version
I0213 23:11:13.831231  112717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-879196
I0213 23:11:13.852970  112717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/functional-879196/id_rsa Username:docker}
I0213 23:11:13.964984  112717 build_images.go:151] Building image from path: /tmp/build.1334773709.tar
I0213 23:11:13.965054  112717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0213 23:11:13.973893  112717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1334773709.tar
I0213 23:11:13.977184  112717 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1334773709.tar: stat -c "%s %y" /var/lib/minikube/build/build.1334773709.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1334773709.tar': No such file or directory
I0213 23:11:13.977216  112717 ssh_runner.go:362] scp /tmp/build.1334773709.tar --> /var/lib/minikube/build/build.1334773709.tar (3072 bytes)
I0213 23:11:14.001500  112717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1334773709
I0213 23:11:14.010215  112717 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1334773709 -xf /var/lib/minikube/build/build.1334773709.tar
I0213 23:11:14.019346  112717 crio.go:297] Building image: /var/lib/minikube/build/build.1334773709
I0213 23:11:14.019408  112717 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-879196 /var/lib/minikube/build/build.1334773709 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0213 23:11:16.676824  112717 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-879196 /var/lib/minikube/build/build.1334773709 --cgroup-manager=cgroupfs: (2.657392561s)
I0213 23:11:16.676891  112717 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1334773709
I0213 23:11:16.685021  112717 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1334773709.tar
I0213 23:11:16.692890  112717 build_images.go:207] Built localhost/my-image:functional-879196 from /tmp/build.1334773709.tar
I0213 23:11:16.692922  112717 build_images.go:123] succeeded building to: functional-879196
I0213 23:11:16.692926  112717 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)
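Each (dbg) Run step above is an exec.Command invocation made by the test harness. A minimal standalone Go sketch of the same build step, assuming it runs from a checkout where out/minikube-linux-amd64 and testdata/build exist:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the functional_test.go:314 step above.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-879196",
		"image", "build",
		"-t", "localhost/my-image:functional-879196",
		"testdata/build",
		"--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

CombinedOutput collects the STEP/COMMIT lines and the podman log shown above as one stream, which is how they appear interleaved in this report.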

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.13s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.101724992s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-879196
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image load --daemon gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 image load --daemon gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr: (4.292651218s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "360.780174ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "78.894339ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.2s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdany-port236385742/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707865842747074481" to /tmp/TestFunctionalparallelMountCmdany-port236385742/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707865842747074481" to /tmp/TestFunctionalparallelMountCmdany-port236385742/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707865842747074481" to /tmp/TestFunctionalparallelMountCmdany-port236385742/001/test-1707865842747074481
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (325.54792ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 13 23:10 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 13 23:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 13 23:10 test-1707865842747074481
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh cat /mount-9p/test-1707865842747074481
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-879196 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cf4a4663-298c-45bb-baba-80ac70e8a797] Pending
helpers_test.go:344: "busybox-mount" [cf4a4663-298c-45bb-baba-80ac70e8a797] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cf4a4663-298c-45bb-baba-80ac70e8a797] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cf4a4663-298c-45bb-baba-80ac70e8a797] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.004235049s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-879196 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdany-port236385742/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.20s)
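The first findmnt probe above exits 1 because the 9p mount is not up yet, and the test simply retries. A Go sketch of that poll loop (the ten-attempt cap and one-second interval are choices for this sketch, not taken from the test):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		// Same probe as functional_test_mount_test.go:115 above.
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-879196",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.Output(); err == nil {
			fmt.Printf("mounted after %d attempt(s): %s", attempt, out)
			return
		}
		time.Sleep(time.Second) // mount daemon may still be starting
	}
	panic("9p mount never appeared")
}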

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "308.897043ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "62.767775ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (8.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image load --daemon gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 image load --daemon gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr: (8.136777654s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (8.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 service list -o json
functional_test.go:1490: Took "406.616521ms" to run "out/minikube-linux-amd64 -p functional-879196 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30656
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30656
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.55s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdspecific-port239178663/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.942159ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdspecific-port239178663/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 ssh "sudo umount -f /mount-9p": exit status 1 (589.859936ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-879196 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdspecific-port239178663/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.55s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.24s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2935888055/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2935888055/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2935888055/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T" /mount1: exit status 1 (479.076157ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-879196 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2935888055/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2935888055/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-879196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2935888055/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.24s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-879196 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-879196 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-879196 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-879196 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 110251: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-879196 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.31s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-879196 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [226b38a2-23e8-4bb4-82d8-b2aaa8ed8a6f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [226b38a2-23e8-4bb4-82d8-b2aaa8ed8a6f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005000532s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image save gcr.io/google-containers/addon-resizer:functional-879196 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image rm gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.02357741s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-879196
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-879196 image save --daemon gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-879196 image save --daemon gcr.io/google-containers/addon-resizer:functional-879196 --alsologtostderr: (1.066562798s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-879196
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-879196 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.105.198 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
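AccessDirect confirms that the LoadBalancer IP reported by WaitService/IngressIP answers over HTTP while `minikube tunnel` is running. An equivalent Go probe (the IP is the one in the log line above; the five-second timeout is a choice for this sketch):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Reachable only while the tunnel started in StartTunnel is alive.
	resp, err := client.Get("http://10.101.105.198")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}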

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-879196 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-879196
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-879196
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-879196
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (69.83s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-660356 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0213 23:11:52.852405   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-660356 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m9.833144817s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (69.83s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.32s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-660356 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-660356 addons enable ingress --alsologtostderr -v=5: (11.322272105s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.32s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-660356 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

                                                
                                    
TestJSONOutput/start/Command (68.76s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-920248 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0213 23:15:59.705898   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:16:20.186893   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:17:01.147772   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-920248 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m8.755408235s)
--- PASS: TestJSONOutput/start/Command (68.76s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
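DistinctCurrentSteps and IncreasingCurrentSteps validate the event stream that --output=json produces. A Go sketch of the monotonicity check, assuming each stdout line is a CloudEvents-style object whose data.currentstep field is a numeric string; that shape is an assumption about minikube's JSON output, not something quoted in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` in here
	for sc.Scan() {
		var ev struct {
			Data struct {
				CurrentStep string `json:"currentstep"`
			} `json:"data"`
		}
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Data.CurrentStep == "" {
			continue // not a step event
		}
		n, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil {
			continue
		}
		if n < last {
			panic(fmt.Sprintf("currentstep decreased: %d after %d", n, last))
		}
		last = n
	}
}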

                                                
                                    
TestJSONOutput/pause/Command (0.66s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-920248 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-920248 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-920248 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-920248 --output=json --user=testUser: (5.743980847s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-960639 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-960639 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.944478ms)

-- stdout --
	{"specversion":"1.0","id":"bf548a2b-be4a-43ab-bee7-dd532b84c83d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-960639] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1371f3f0-6bdf-496a-8ec1-3c48b4fb220f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18169"}}
	{"specversion":"1.0","id":"d858040c-e013-4377-ae92-ea36b4369361","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b84347ac-e370-4a4c-b137-ff6f2d5f1383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig"}}
	{"specversion":"1.0","id":"598db047-e513-4b7c-ad11-ef181456d3ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube"}}
	{"specversion":"1.0","id":"b20d4ce2-687a-4bb9-839e-2bc4ff60bd0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"27e3d753-a86e-4456-a00b-1c21a6999472","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9a8ccd33-1a1c-4862-bf5c-643fef9453e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-960639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-960639
--- PASS: TestErrorJSONOutput (0.23s)
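
Each JSON line in the stdout block above is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype) carrying a data map whose keys vary by event type: step events carry currentstep/name/totalsteps, error events carry exitcode/advice/url. A minimal Go sketch for consuming such a stream follows; the struct fields are inferred from the output above rather than taken from minikube's own types, so treat the shape as an assumption:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors only the fields visible in the log above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` in here
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
				continue
			}
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}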

TestKicCustomNetwork/create_custom_network (34.05s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-329403 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-329403 --network=: (32.080112742s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-329403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-329403
E0213 23:17:49.246369   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:49.251658   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:49.261890   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:49.282167   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:49.322520   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:49.402855   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:49.563292   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:49.883949   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-329403: (1.955872975s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.05s)
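
The check at kic_custom_network_test.go:150 reduces to listing network names and testing membership. A hedged sketch of that step, reusing the exact `docker network ls --format {{.Name}}` invocation shown above (the helper name is illustrative, not the test's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// networkExists reports whether `docker network ls` prints the given
	// name; the --format flag emits one network name per line.
	func networkExists(name string) (bool, error) {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return false, err
		}
		for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if n == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := networkExists("docker-network-329403")
		fmt.Println(ok, err)
	}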

TestKicCustomNetwork/use_default_bridge_network (27.25s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-278304 --network=bridge
E0213 23:17:50.525067   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:51.805574   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:54.366334   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:17:59.487053   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
E0213 23:18:09.727298   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-278304 --network=bridge: (25.389554841s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-278304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-278304
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-278304: (1.839590044s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.25s)

TestKicExistingNetwork (26.08s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-266338 --network=existing-network
E0213 23:18:23.068449   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:18:30.207906   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-266338 --network=existing-network: (24.104573904s)
helpers_test.go:175: Cleaning up "existing-network-266338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-266338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-266338: (1.846543415s)
--- PASS: TestKicExistingNetwork (26.08s)

TestKicCustomSubnet (24.91s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-845083 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-845083 --subnet=192.168.60.0/24: (22.84409692s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-845083 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-845083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-845083
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-845083: (2.046914329s)
--- PASS: TestKicCustomSubnet (24.91s)
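
The subnet assertion reads the pool back with `docker network inspect custom-subnet-845083 --format "{{(index .IPAM.Config 0).Subnet}}"` and compares it against the requested --subnet. A sketch of that comparison, normalizing both sides through net.ParseCIDR (network name and CIDR come from the log; the comparison logic is an assumption about what the test asserts):

	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"strings"
	)

	func main() {
		// Read the first IPAM pool of the network minikube created.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-845083",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		_, want, _ := net.ParseCIDR("192.168.60.0/24")
		_, got, err := net.ParseCIDR(strings.TrimSpace(string(out)))
		if err != nil {
			fmt.Println("unparseable subnet:", string(out))
			return
		}
		fmt.Println("subnet matches:", got.String() == want.String())
	}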

TestKicStaticIP (26.64s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-545761 --static-ip=192.168.200.200
E0213 23:19:09.007742   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:19:11.168270   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-545761 --static-ip=192.168.200.200: (24.417363577s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-545761 ip
helpers_test.go:175: Cleaning up "static-ip-545761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-545761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-545761: (2.086540402s)
--- PASS: TestKicStaticIP (26.64s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (52.86s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-123776 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-123776 --driver=docker  --container-runtime=crio: (24.22382108s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-126320 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-126320 --driver=docker  --container-runtime=crio: (23.847229523s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-123776
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-126320
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-126320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-126320
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-126320: (1.88130489s)
helpers_test.go:175: Cleaning up "first-123776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-123776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-123776: (1.861956075s)
--- PASS: TestMinikubeProfile (52.86s)

TestMountStart/serial/StartWithMountFirst (5.24s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-512731 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-512731 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.234286323s)
E0213 23:20:33.088566   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (5.24s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-512731 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.49s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-532191 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-532191 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.490750907s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.49s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-532191 ssh -- ls /minikube-host
E0213 23:20:39.224642   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-512731 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-512731 --alsologtostderr -v=5: (1.598712659s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-532191 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-532191
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-532191: (1.182377023s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (6.95s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-532191
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-532191: (5.946420691s)
--- PASS: TestMountStart/serial/RestartStopped (6.95s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-532191 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (86.5s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963978 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0213 23:21:06.909681   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-963978 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m26.038679131s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.50s)

TestMultiNode/serial/DeployApp2Nodes (4.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-963978 -- rollout status deployment/busybox: (3.127519694s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-bnz4l -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-fhwxr -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-bnz4l -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-fhwxr -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-bnz4l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-fhwxr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.80s)
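
The jsonpath queries above flatten results onto one line, so `{.items[*].status.podIP}` prints the busybox pod IPs space-separated (e.g. `10.244.0.3 10.244.1.2` for a two-node spread). A sketch of the distinctness check that a one-replica-per-node deployment implies (sample IPs are made up):

	package main

	import (
		"fmt"
		"strings"
	)

	// distinctIPs takes the raw jsonpath output and reports whether it
	// names exactly two pods with two different IPs.
	func distinctIPs(jsonpathOut string) bool {
		ips := strings.Fields(jsonpathOut)
		seen := map[string]bool{}
		for _, ip := range ips {
			if seen[ip] {
				return false
			}
			seen[ip] = true
		}
		return len(ips) == 2
	}

	func main() {
		fmt.Println(distinctIPs("10.244.0.3 10.244.1.2")) // true
		fmt.Println(distinctIPs("10.244.0.3 10.244.0.3")) // false: same IP twice
	}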

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-bnz4l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-bnz4l -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-fhwxr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963978 -- exec busybox-5b5d89c9d6-fhwxr -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
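
The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` selects line 5 of busybox nslookup's output and takes its third space-separated field, which is where the resolved host IP sits in that layout; the IP is then pinged from inside each pod. The same extraction in Go (the sample text only approximates busybox's output format and is an assumption):

	package main

	import (
		"fmt"
		"strings"
	)

	// fieldOnLine mimics `awk 'NR==line' | cut -d' ' -ffield` (1-indexed).
	func fieldOnLine(out string, line, field int) string {
		lines := strings.Split(out, "\n")
		if line > len(lines) {
			return ""
		}
		parts := strings.Split(lines[line-1], " ")
		if field > len(parts) {
			return ""
		}
		return parts[field-1]
	}

	func main() {
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.58.1\n"
		fmt.Println(fieldOnLine(sample, 5, 3)) // 192.168.58.1
	}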

TestMultiNode/serial/AddNode (16.61s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-963978 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-963978 -v 3 --alsologtostderr: (16.007885196s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.61s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-963978 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.28s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

TestMultiNode/serial/CopyFile (9.39s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp testdata/cp-test.txt multinode-963978:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2015836961/001/cp-test_multinode-963978.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978:/home/docker/cp-test.txt multinode-963978-m02:/home/docker/cp-test_multinode-963978_multinode-963978-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m02 "sudo cat /home/docker/cp-test_multinode-963978_multinode-963978-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978:/home/docker/cp-test.txt multinode-963978-m03:/home/docker/cp-test_multinode-963978_multinode-963978-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m03 "sudo cat /home/docker/cp-test_multinode-963978_multinode-963978-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp testdata/cp-test.txt multinode-963978-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2015836961/001/cp-test_multinode-963978-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978-m02:/home/docker/cp-test.txt multinode-963978:/home/docker/cp-test_multinode-963978-m02_multinode-963978.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978 "sudo cat /home/docker/cp-test_multinode-963978-m02_multinode-963978.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978-m02:/home/docker/cp-test.txt multinode-963978-m03:/home/docker/cp-test_multinode-963978-m02_multinode-963978-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m03 "sudo cat /home/docker/cp-test_multinode-963978-m02_multinode-963978-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp testdata/cp-test.txt multinode-963978-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2015836961/001/cp-test_multinode-963978-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978-m03:/home/docker/cp-test.txt multinode-963978:/home/docker/cp-test_multinode-963978-m03_multinode-963978.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978 "sudo cat /home/docker/cp-test_multinode-963978-m03_multinode-963978.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 cp multinode-963978-m03:/home/docker/cp-test.txt multinode-963978-m02:/home/docker/cp-test_multinode-963978-m03_multinode-963978-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m03 "sudo cat /home/docker/cp-test.txt"
E0213 23:22:49.246123   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 ssh -n multinode-963978-m02 "sudo cat /home/docker/cp-test_multinode-963978-m03_multinode-963978-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.39s)
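
Every block above is one round trip: `minikube cp` a file onto a node, `ssh -n <node> "sudo cat ..."` it back, and compare contents. Condensed into a single Go sketch (binary path, profile, and file paths come straight from the log; error handling is trimmed):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		local, _ := os.ReadFile("testdata/cp-test.txt")
		// Push the file onto the control-plane node...
		exec.Command("out/minikube-linux-amd64", "-p", "multinode-963978", "cp",
			"testdata/cp-test.txt", "multinode-963978:/home/docker/cp-test.txt").Run()
		// ...then read it back over ssh and compare.
		remote, _ := exec.Command("out/minikube-linux-amd64", "-p", "multinode-963978",
			"ssh", "-n", "multinode-963978", "sudo cat /home/docker/cp-test.txt").Output()
		fmt.Println("round trip intact:", bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)))
	}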

TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-963978 node stop m03: (1.180404887s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-963978 status: exit status 7 (471.640379ms)

-- stdout --
	multinode-963978
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-963978-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-963978-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-963978 status --alsologtostderr: exit status 7 (476.540796ms)

-- stdout --
	multinode-963978
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-963978-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-963978-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0213 23:22:51.407196  171690 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:22:51.407483  171690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:22:51.407495  171690 out.go:304] Setting ErrFile to fd 2...
	I0213 23:22:51.407503  171690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:22:51.407716  171690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:22:51.407963  171690 out.go:298] Setting JSON to false
	I0213 23:22:51.408013  171690 mustload.go:65] Loading cluster: multinode-963978
	I0213 23:22:51.408062  171690 notify.go:220] Checking for updates...
	I0213 23:22:51.408492  171690 config.go:182] Loaded profile config "multinode-963978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:22:51.408512  171690 status.go:255] checking status of multinode-963978 ...
	I0213 23:22:51.408972  171690 cli_runner.go:164] Run: docker container inspect multinode-963978 --format={{.State.Status}}
	I0213 23:22:51.428930  171690 status.go:330] multinode-963978 host status = "Running" (err=<nil>)
	I0213 23:22:51.428955  171690 host.go:66] Checking if "multinode-963978" exists ...
	I0213 23:22:51.429215  171690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-963978
	I0213 23:22:51.446847  171690 host.go:66] Checking if "multinode-963978" exists ...
	I0213 23:22:51.447113  171690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 23:22:51.447151  171690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-963978
	I0213 23:22:51.462943  171690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/multinode-963978/id_rsa Username:docker}
	I0213 23:22:51.557559  171690 ssh_runner.go:195] Run: systemctl --version
	I0213 23:22:51.561662  171690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:22:51.572475  171690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:22:51.623384  171690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:67 SystemTime:2024-02-13 23:22:51.614313721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:22:51.623917  171690 kubeconfig.go:92] found "multinode-963978" server: "https://192.168.58.2:8443"
	I0213 23:22:51.623942  171690 api_server.go:166] Checking apiserver status ...
	I0213 23:22:51.623979  171690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:22:51.634491  171690 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	I0213 23:22:51.643658  171690 api_server.go:182] apiserver freezer: "10:freezer:/docker/2bb976d17d97362c40358d3403273330f8751749ef16941489e698717e89bde5/crio/crio-aba3be65913cf5b452a8836067d4045d3901907e8e9730a61f7349cf99807882"
	I0213 23:22:51.643735  171690 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2bb976d17d97362c40358d3403273330f8751749ef16941489e698717e89bde5/crio/crio-aba3be65913cf5b452a8836067d4045d3901907e8e9730a61f7349cf99807882/freezer.state
	I0213 23:22:51.651804  171690 api_server.go:204] freezer state: "THAWED"
	I0213 23:22:51.651842  171690 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0213 23:22:51.656169  171690 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0213 23:22:51.656199  171690 status.go:421] multinode-963978 apiserver status = Running (err=<nil>)
	I0213 23:22:51.656209  171690 status.go:257] multinode-963978 status: &{Name:multinode-963978 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 23:22:51.656231  171690 status.go:255] checking status of multinode-963978-m02 ...
	I0213 23:22:51.656565  171690 cli_runner.go:164] Run: docker container inspect multinode-963978-m02 --format={{.State.Status}}
	I0213 23:22:51.673120  171690 status.go:330] multinode-963978-m02 host status = "Running" (err=<nil>)
	I0213 23:22:51.673161  171690 host.go:66] Checking if "multinode-963978-m02" exists ...
	I0213 23:22:51.673403  171690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-963978-m02
	I0213 23:22:51.689522  171690 host.go:66] Checking if "multinode-963978-m02" exists ...
	I0213 23:22:51.689803  171690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 23:22:51.689837  171690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-963978-m02
	I0213 23:22:51.705552  171690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/18169-66678/.minikube/machines/multinode-963978-m02/id_rsa Username:docker}
	I0213 23:22:51.797169  171690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:22:51.807474  171690 status.go:257] multinode-963978-m02 status: &{Name:multinode-963978-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0213 23:22:51.807517  171690 status.go:255] checking status of multinode-963978-m03 ...
	I0213 23:22:51.807783  171690 cli_runner.go:164] Run: docker container inspect multinode-963978-m03 --format={{.State.Status}}
	I0213 23:22:51.824568  171690 status.go:330] multinode-963978-m03 host status = "Stopped" (err=<nil>)
	I0213 23:22:51.824593  171690 status.go:343] host is not running, skipping remaining checks
	I0213 23:22:51.824600  171690 status.go:257] multinode-963978-m03 status: &{Name:multinode-963978-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
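
The stderr trace shows how status decides "apiserver: Running": find the kube-apiserver PID, confirm its freezer cgroup is THAWED, then GET /healthz on the apiserver and expect 200/ok. A sketch of just that final probe (the address comes from the log; skipping TLS verification stands in for the cluster's self-signed CA, which a real client would pin instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}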

TestMultiNode/serial/StartAfterStop (11.56s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-963978 node start m03 --alsologtostderr: (10.868029877s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.56s)

TestMultiNode/serial/RestartKeepsNodes (111.81s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-963978
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-963978
E0213 23:23:16.928776   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-963978: (24.57597257s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963978 --wait=true -v=8 --alsologtostderr
E0213 23:24:09.007909   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-963978 --wait=true -v=8 --alsologtostderr: (1m27.111919001s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-963978
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.81s)

TestMultiNode/serial/DeleteNode (4.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-963978 node delete m03: (4.078416835s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.68s)
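
The final kubectl call renders the node list through a Go template that walks each node's conditions and prints the status of the one whose type is Ready, i.e. one True/False per node. The identical template evaluated locally over a stand-in node list (the data shape mimics kubectl's JSON output; the literal values are invented):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "Ready", "status": "True"},
				}}},
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "MemoryPressure", "status": "False"},
					{"type": "Ready", "status": "True"},
				}}},
			},
		}
		// Prints " True" twice: both stand-in nodes report Ready=True.
		template.Must(template.New("ready").Parse(tpl)).Execute(os.Stdout, nodes)
	}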

TestMultiNode/serial/StopMultiNode (23.69s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-963978 stop: (23.493695049s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-963978 status: exit status 7 (97.979329ms)

-- stdout --
	multinode-963978
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-963978-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-963978 status --alsologtostderr: exit status 7 (95.159854ms)

-- stdout --
	multinode-963978
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-963978-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0213 23:25:23.526597  181819 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:25:23.526721  181819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:25:23.526731  181819 out.go:304] Setting ErrFile to fd 2...
	I0213 23:25:23.526736  181819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:25:23.526961  181819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:25:23.527167  181819 out.go:298] Setting JSON to false
	I0213 23:25:23.527211  181819 mustload.go:65] Loading cluster: multinode-963978
	I0213 23:25:23.527298  181819 notify.go:220] Checking for updates...
	I0213 23:25:23.527658  181819 config.go:182] Loaded profile config "multinode-963978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:25:23.527677  181819 status.go:255] checking status of multinode-963978 ...
	I0213 23:25:23.528193  181819 cli_runner.go:164] Run: docker container inspect multinode-963978 --format={{.State.Status}}
	I0213 23:25:23.546656  181819 status.go:330] multinode-963978 host status = "Stopped" (err=<nil>)
	I0213 23:25:23.546677  181819 status.go:343] host is not running, skipping remaining checks
	I0213 23:25:23.546682  181819 status.go:257] multinode-963978 status: &{Name:multinode-963978 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 23:25:23.546702  181819 status.go:255] checking status of multinode-963978-m02 ...
	I0213 23:25:23.546937  181819 cli_runner.go:164] Run: docker container inspect multinode-963978-m02 --format={{.State.Status}}
	I0213 23:25:23.564042  181819 status.go:330] multinode-963978-m02 host status = "Stopped" (err=<nil>)
	I0213 23:25:23.564067  181819 status.go:343] host is not running, skipping remaining checks
	I0213 23:25:23.564073  181819 status.go:257] multinode-963978-m02 status: &{Name:multinode-963978-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.69s)

TestMultiNode/serial/RestartMultiNode (73.61s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963978 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0213 23:25:32.053970   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
E0213 23:25:39.224702   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-963978 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.011552557s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963978 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (73.61s)

TestMultiNode/serial/ValidateNameConflict (23.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-963978
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963978-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-963978-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.702158ms)

-- stdout --
	* [multinode-963978-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-963978-m02' is duplicated with machine name 'multinode-963978-m02' in profile 'multinode-963978'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963978-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-963978-m03 --driver=docker  --container-runtime=crio: (20.814808127s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-963978
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-963978: exit status 80 (274.488383ms)

-- stdout --
	* Adding node m03 to cluster multinode-963978
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-963978-m03 already exists in multinode-963978-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-963978-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-963978-m03: (1.876793203s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.10s)
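The exit-14 step above is the point of the test: MK_USAGE fires because 'multinode-963978-m02' already exists as a machine name inside profile 'multinode-963978', so profile names must be unique across both profiles and machines. A minimal sketch of the safe pattern (check first, then pick an unused name):

	$ out/minikube-linux-amd64 profile list
	$ out/minikube-linux-amd64 start -p multinode-963978-m03 --driver=docker --container-runtime=crio

The later exit-80 (GUEST_NODE_ADD) shows the mirror image: once a standalone profile named like a node exists, node add on the original cluster refuses to reuse the name until that profile is deleted, which is exactly what the cleanup step does.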

                                                
                                    
TestPreload (130.39s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-286562 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0213 23:27:49.246712   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-286562 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m14.768670156s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-286562 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-286562 image pull gcr.io/k8s-minikube/busybox: (1.780177134s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-286562
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-286562: (5.711933279s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-286562 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0213 23:29:09.008138   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-286562 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (45.634780271s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-286562 image list
helpers_test.go:175: Cleaning up "test-preload-286562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-286562
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-286562: (2.268542974s)
--- PASS: TestPreload (130.39s)
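TestPreload checks that an image pulled into a cluster created with --preload=false is still present after a stop/start cycle, i.e. that switching over to the preloaded tarball does not wipe previously pulled images. Roughly the same scenario by hand, using this run's commands:

	$ out/minikube-linux-amd64 start -p test-preload-286562 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	$ out/minikube-linux-amd64 -p test-preload-286562 image pull gcr.io/k8s-minikube/busybox
	$ out/minikube-linux-amd64 stop -p test-preload-286562
	$ out/minikube-linux-amd64 start -p test-preload-286562 --memory=2200 --driver=docker --container-runtime=crio
	$ out/minikube-linux-amd64 -p test-preload-286562 image list    # busybox should still appear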

                                                
                                    
TestScheduledStopUnix (97.14s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-952442 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-952442 --memory=2048 --driver=docker  --container-runtime=crio: (21.422296713s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-952442 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-952442 -n scheduled-stop-952442
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-952442 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-952442 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-952442 -n scheduled-stop-952442
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-952442
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-952442 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0213 23:30:39.224127   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-952442
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-952442: exit status 7 (81.370778ms)

                                                
                                                
-- stdout --
	scheduled-stop-952442
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-952442 -n scheduled-stop-952442
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-952442 -n scheduled-stop-952442: exit status 7 (82.365388ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-952442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-952442
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-952442: (4.276611952s)
--- PASS: TestScheduledStopUnix (97.14s)
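The scheduled-stop commands above compose into a small reusable flow: --schedule arms a delayed stop, --cancel-scheduled disarms a pending one, and status --format={{.TimeToStop}} exposes the countdown. The exit status 7 at the end is the expected 'Stopped' code once a 15s schedule is allowed to fire:

	$ out/minikube-linux-amd64 stop -p scheduled-stop-952442 --schedule 5m
	$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-952442
	$ out/minikube-linux-amd64 stop -p scheduled-stop-952442 --cancel-scheduled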

                                                
                                    
TestInsufficientStorage (12.98s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-820904 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-820904 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.59602068s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6707c01b-8834-416f-9cd2-0fe0854480b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-820904] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c05b4e6-6861-4ea3-a8b5-b891bd3f8b19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18169"}}
	{"specversion":"1.0","id":"e165f42d-6792-42e1-beb7-63db36e4b906","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f573026-d51d-4999-9933-1bd4921f7a97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig"}}
	{"specversion":"1.0","id":"9166b28c-72ba-4ad0-8a7b-c4d8463e76fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube"}}
	{"specversion":"1.0","id":"ee0802b3-8dc2-43ce-a3ba-8fb6a53a6b38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"daeee19e-6e68-476e-8068-94a6ea973c4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50783728-8229-4398-a53a-ed6abcf30357","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0e1c32b7-22d5-440a-8037-f2b59ab14875","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"220124fd-b43f-4c7f-a0d6-5efd11b84e2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c40961b-52b8-4a7f-acba-ea38316f6da7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"afef5d04-0ae3-4277-81ec-42c41c17624c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-820904 in cluster insufficient-storage-820904","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"974d9977-4e61-4616-b1dd-d53f1c11deed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e296e304-37a4-4ee4-9f5c-1d801471fbfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"07c21efb-fc3a-4af9-8479-7cb091b22546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-820904 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-820904 --output=json --layout=cluster: exit status 7 (272.01134ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-820904","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-820904","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:31:02.566182  202732 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-820904" does not appear in /home/jenkins/minikube-integration/18169-66678/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-820904 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-820904 --output=json --layout=cluster: exit status 7 (272.660431ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-820904","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-820904","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:31:02.839587  202824 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-820904" does not appear in /home/jenkins/minikube-integration/18169-66678/kubeconfig
	E0213 23:31:02.849284  202824 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/insufficient-storage-820904/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-820904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-820904
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-820904: (1.840831825s)
--- PASS: TestInsufficientStorage (12.98s)
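The RSRC_DOCKER_STORAGE event above (exit code 26) ships its own remediation advice; condensed from the emitted JSON, with --force as the documented escape hatch for the capacity check:

	$ docker system prune -a                    # reclaim unused Docker data on the host
	$ minikube ssh -- docker system prune       # if using the Docker container runtime inside the node
	$ minikube start --force ...                # skip the storage check (at your own risk)

Note that both status calls still work in this state: they report StatusCode 507 (InsufficientStorage) per node and exit 7, which is what the test asserts.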

                                                
                                    
TestRunningBinaryUpgrade (88.26s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.802730038 start -p running-upgrade-454185 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.802730038 start -p running-upgrade-454185 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m0.817881309s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-454185 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-454185 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.922100607s)
helpers_test.go:175: Cleaning up "running-upgrade-454185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-454185
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-454185: (6.068487111s)
--- PASS: TestRunningBinaryUpgrade (88.26s)

                                                
                                    
TestKubernetesUpgrade (371.24s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.991178968s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-475933
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-475933: (1.196894248s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-475933 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-475933 status --format={{.Host}}: exit status 7 (77.284238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.703433266s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-475933 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (107.559836ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-475933] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-475933
	    minikube start -p kubernetes-upgrade-475933 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4759332 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-475933 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.106566242s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-475933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-475933
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-475933: (2.990293848s)
--- PASS: TestKubernetesUpgrade (371.24s)
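The sequence above is the supported upgrade path: start at the old version, stop, start the same profile at the new version, then let kubectl confirm. The exit-106 step documents that in-place downgrades are refused (K8S_DOWNGRADE_UNSUPPORTED), with the suggestion block listing the three sanctioned ways out. The happy path, as exercised here:

	$ out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	$ out/minikube-linux-amd64 stop -p kubernetes-upgrade-475933
	$ out/minikube-linux-amd64 start -p kubernetes-upgrade-475933 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
	$ kubectl --context kubernetes-upgrade-475933 version --output=json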

                                                
                                    
TestMissingContainerUpgrade (91.42s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2547984426 start -p missing-upgrade-824570 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2547984426 start -p missing-upgrade-824570 --memory=2200 --driver=docker  --container-runtime=crio: (23.547790775s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-824570
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-824570: (10.400086349s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-824570
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-824570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0213 23:35:39.224634   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-824570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.252609578s)
helpers_test.go:175: Cleaning up "missing-upgrade-824570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-824570
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-824570: (4.676659585s)
--- PASS: TestMissingContainerUpgrade (91.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (87.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.142237317 start -p stopped-upgrade-504430 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0213 23:32:02.270459   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.142237317 start -p stopped-upgrade-504430 --memory=2200 --vm-driver=docker  --container-runtime=crio: (59.95548952s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.142237317 -p stopped-upgrade-504430 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.142237317 -p stopped-upgrade-504430 stop: (2.362328002s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-504430 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-504430 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.43075986s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (87.79s)

                                                
                                    
TestPause/serial/Start (69.63s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-476498 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-476498 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m9.630991616s)
--- PASS: TestPause/serial/Start (69.63s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-504430
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-504430: (1.194507468s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223197 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-223197 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (113.092128ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-223197] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
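--no-kubernetes and --kubernetes-version are mutually exclusive, and the MK_USAGE message points at the fix for the case where the version comes from a persisted global config rather than the command line:

	$ minikube config unset kubernetes-version
	$ out/minikube-linux-amd64 start -p NoKubernetes-223197 --no-kubernetes --driver=docker --container-runtime=crio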

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (31.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223197 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-223197 --driver=docker  --container-runtime=crio: (31.164687106s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-223197 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.49s)

                                                
                                    
TestNetworkPlugins/group/false (3.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-125132 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-125132 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (184.090363ms)

                                                
                                                
-- stdout --
	* [false-125132] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18169
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 23:32:42.538523  224093 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:32:42.538677  224093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:32:42.538690  224093 out.go:304] Setting ErrFile to fd 2...
	I0213 23:32:42.538700  224093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:32:42.538968  224093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18169-66678/.minikube/bin
	I0213 23:32:42.540249  224093 out.go:298] Setting JSON to false
	I0213 23:32:42.541590  224093 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8110,"bootTime":1707859053,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:32:42.541663  224093 start.go:138] virtualization: kvm guest
	I0213 23:32:42.543945  224093 out.go:177] * [false-125132] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:32:42.545782  224093 out.go:177]   - MINIKUBE_LOCATION=18169
	I0213 23:32:42.545777  224093 notify.go:220] Checking for updates...
	I0213 23:32:42.547527  224093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:32:42.551327  224093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18169-66678/kubeconfig
	I0213 23:32:42.553338  224093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18169-66678/.minikube
	I0213 23:32:42.555099  224093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:32:42.556885  224093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:32:42.559023  224093 config.go:182] Loaded profile config "NoKubernetes-223197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:32:42.559165  224093 config.go:182] Loaded profile config "kubernetes-upgrade-475933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:32:42.559284  224093 config.go:182] Loaded profile config "pause-476498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:32:42.559406  224093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:32:42.593380  224093 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0213 23:32:42.593490  224093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0213 23:32:42.643620  224093 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:77 SystemTime:2024-02-13 23:32:42.634967783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0213 23:32:42.643730  224093 docker.go:295] overlay module found
	I0213 23:32:42.646054  224093 out.go:177] * Using the docker driver based on user configuration
	I0213 23:32:42.647464  224093 start.go:298] selected driver: docker
	I0213 23:32:42.647485  224093 start.go:902] validating driver "docker" against <nil>
	I0213 23:32:42.647496  224093 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:32:42.649777  224093 out.go:177] 
	W0213 23:32:42.651047  224093 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0213 23:32:42.652443  224093 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-125132 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-125132" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 23:32:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-475933
contexts:
- context:
    cluster: kubernetes-upgrade-475933
    user: kubernetes-upgrade-475933
  name: kubernetes-upgrade-475933
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-475933
  user:
    client-certificate: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/kubernetes-upgrade-475933/client.crt
    client-key: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/kubernetes-upgrade-475933/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-125132

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

                                                
                                                

                                                

>>> host: cri-dockerd version:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: containerd daemon status:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: containerd daemon config:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: /etc/containerd/config.toml:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: containerd config dump:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: crio daemon status:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: crio daemon config:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: /etc/crio:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

>>> host: crio config:
* Profile "false-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-125132"

----------------------- debugLogs end: false-125132 [took: 3.331575593s] --------------------------------
helpers_test.go:175: Cleaning up "false-125132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-125132
--- PASS: TestNetworkPlugins/group/false (3.68s)

TestNoKubernetes/serial/StartWithStopK8s (16.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223197 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-223197 --no-kubernetes --driver=docker  --container-runtime=crio: (14.550322857s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-223197 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-223197 status -o json: exit status 2 (335.03117ms)

-- stdout --
	{"Name":"NoKubernetes-223197","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-223197
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-223197: (1.949309638s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.84s)
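
Note: this test exercises minikube's --no-kubernetes mode, where the container host runs but no Kubernetes components are started, which is why the intermediate status check above exits non-zero while still counting as a pass. A minimal sketch of the same sequence outside the harness (the profile name "demo" is illustrative):

    $ minikube start -p demo --no-kubernetes --driver=docker --container-runtime=crio
    $ minikube status -p demo -o json
    # expected: "Host":"Running" with "Kubelet":"Stopped" and "APIServer":"Stopped";
    # minikube signals the stopped components through a non-zero exit status
    $ minikube delete -p demo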

TestNoKubernetes/serial/Start (7.22s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223197 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-223197 --no-kubernetes --driver=docker  --container-runtime=crio: (7.21609658s)
--- PASS: TestNoKubernetes/serial/Start (7.22s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-223197 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-223197 "sudo systemctl is-active --quiet service kubelet": exit status 1 (299.13744ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
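
Note: `systemctl is-active --quiet` exits 0 only when the unit is active; exit status 3 conventionally means "inactive", so the non-zero exit above is the expected (passing) outcome. A quick way to confirm this by hand, assuming a --no-kubernetes profile named "demo":

    $ minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
    # non-zero -- minikube itself exits 1 and reports the remote
    # "Process exited with status 3" (kubelet inactive) on stderr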

TestNoKubernetes/serial/ProfileList (1.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

TestNoKubernetes/serial/Stop (1.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-223197
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-223197: (1.194877968s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

TestNoKubernetes/serial/StartNoArgs (6.14s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-223197 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-223197 --driver=docker  --container-runtime=crio: (6.144132916s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.14s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-223197 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-223197 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.339468ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestPause/serial/SecondStartNoReconfiguration (41.55s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-476498 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-476498 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.524625743s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.55s)

TestPause/serial/Pause (0.91s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-476498 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-476498 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-476498 --output=json --layout=cluster: exit status 2 (307.58852ms)

-- stdout --
	{"Name":"pause-476498","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-476498","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
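
Note: with --layout=cluster, minikube reports component health using HTTP-style status codes, visible in the stdout above: 200 ("OK"), 405 ("Stopped"), and 418 ("Paused"). The exit status 2 is the expected signal for a paused cluster. To inspect the same view by hand (profile name illustrative):

    $ minikube status -p demo --output=json --layout=cluster
    # while paused: cluster and apiserver report 418/"Paused",
    # the kubelet reports 405/"Stopped", and the command exits non-zero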

TestPause/serial/Unpause (0.74s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-476498 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

TestPause/serial/PauseAgain (0.9s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-476498 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.90s)

TestPause/serial/DeletePaused (2.72s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-476498 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-476498 --alsologtostderr -v=5: (2.724554617s)
--- PASS: TestPause/serial/DeletePaused (2.72s)

TestPause/serial/VerifyDeletedResources (17.96s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (17.906702002s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-476498
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-476498: exit status 1 (16.765905ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-476498: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (17.96s)
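
Note: this step verifies that `minikube delete` removed every Docker-level artifact of the profile; the "no such volume" failure from `docker volume inspect` (exit status 1) is the success condition. A hand-run equivalent of the cleanup check (profile name illustrative; minikube names the container, volume, and network after the profile):

    $ minikube delete -p demo
    $ docker ps -a --filter name=demo   # should list no containers
    $ docker volume inspect demo        # should fail with "no such volume"
    $ docker network ls                 # no "demo" network should remain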

TestStartStop/group/old-k8s-version/serial/FirstStart (118.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-488781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-488781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m58.914079716s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (118.91s)

TestStartStop/group/no-preload/serial/FirstStart (51.01s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-281187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-281187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (51.006478176s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.01s)

TestStartStop/group/no-preload/serial/DeployApp (7.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-281187 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [474d6a5e-faec-4173-a2eb-3d02a8cc9b81] Pending
helpers_test.go:344: "busybox" [474d6a5e-faec-4173-a2eb-3d02a8cc9b81] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [474d6a5e-faec-4173-a2eb-3d02a8cc9b81] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.008920922s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-281187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.33s)
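
Note: DeployApp creates a busybox pod from the repo's testdata, waits for it to become Ready, and then execs `ulimit -n` inside it to confirm the container's open-file limit is queryable. A rough equivalent using kubectl directly (the `kubectl wait` call is an illustrative substitute for the harness's polling helper):

    $ kubectl create -f testdata/busybox.yaml
    $ kubectl wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    $ kubectl exec busybox -- /bin/sh -c "ulimit -n"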

TestStartStop/group/old-k8s-version/serial/DeployApp (7.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-488781 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93030c60-1624-48a0-b878-860d8100f313] Pending
helpers_test.go:344: "busybox" [93030c60-1624-48a0-b878-860d8100f313] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [93030c60-1624-48a0-b878-860d8100f313] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003595248s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-488781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.59s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-281187 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-281187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/no-preload/serial/Stop (11.87s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-281187 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-281187 --alsologtostderr -v=3: (11.872140667s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-488781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-488781 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/old-k8s-version/serial/Stop (11.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-488781 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-488781 --alsologtostderr -v=3: (11.882365564s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-281187 -n no-preload-281187
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-281187 -n no-preload-281187: exit status 7 (79.003151ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-281187 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
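
Note: on a stopped profile, `minikube status` prints "Stopped" and exits non-zero (status 7 in the run above), which the harness explicitly tolerates ("may be ok"). The point of the test is that addons can still be enabled while the profile is down, so the setting takes effect on the next start. Sketch (profile name illustrative):

    $ minikube stop -p demo
    $ minikube status --format='{{.Host}}' -p demo   # prints "Stopped", exits non-zero
    $ minikube addons enable dashboard -p demo       # recorded now, applied on next start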

TestStartStop/group/no-preload/serial/SecondStart (339.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-281187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-281187 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m39.592085626s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-281187 -n no-preload-281187
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.96s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488781 -n old-k8s-version-488781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488781 -n old-k8s-version-488781: exit status 7 (124.828168ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-488781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/old-k8s-version/serial/SecondStart (429.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-488781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-488781 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m9.274507795s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-488781 -n old-k8s-version-488781
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (429.65s)

TestStartStop/group/embed-certs/serial/FirstStart (75.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-530478 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-530478 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m15.148083149s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-290912 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0213 23:37:49.246584   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-290912 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m7.836746368s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.84s)

TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-530478 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b102613f-fd37-43db-9f87-b8005dee7b3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b102613f-fd37-43db-9f87-b8005dee7b3e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004202454s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-530478 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-530478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-530478 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/embed-certs/serial/Stop (11.84s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-530478 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-530478 --alsologtostderr -v=3: (11.838046058s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.84s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-290912 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dcacc197-faca-4033-85a8-3e7c49bbba75] Pending
helpers_test.go:344: "busybox" [dcacc197-faca-4033-85a8-3e7c49bbba75] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dcacc197-faca-4033-85a8-3e7c49bbba75] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003920606s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-290912 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-530478 -n embed-certs-530478
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-530478 -n embed-certs-530478: exit status 7 (83.073354ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-530478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (339.84s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-530478 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-530478 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m39.392343463s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-530478 -n embed-certs-530478
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (339.84s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-290912 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-290912 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-290912 --alsologtostderr -v=3
E0213 23:39:09.007625   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-290912 --alsologtostderr -v=3: (11.855536047s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.86s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912: exit status 7 (84.799982ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-290912 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-290912 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0213 23:40:39.224119   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
E0213 23:42:12.054513   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-290912 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m41.03269705s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qftcw" [406ad47a-8a0a-4b8a-a95f-9cecad07b339] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qftcw" [406ad47a-8a0a-4b8a-a95f-9cecad07b339] Running
E0213 23:42:49.246077   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004279721s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qftcw" [406ad47a-8a0a-4b8a-a95f-9cecad07b339] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004015791s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-281187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-281187 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.98s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-281187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-281187 -n no-preload-281187
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-281187 -n no-preload-281187: exit status 2 (342.240551ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-281187 -n no-preload-281187
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-281187 -n no-preload-281187: exit status 2 (326.986516ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-281187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-281187 -n no-preload-281187
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-281187 -n no-preload-281187
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.98s)
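
Note: the Pause step drives a full pause/verify/unpause/verify cycle. While paused, the apiserver reports "Paused" and the kubelet "Stopped", and `minikube status` exits with status 2, which the harness treats as acceptable. The same cycle by hand (profile name illustrative):

    $ minikube pause -p demo
    $ minikube status --format='{{.APIServer}}' -p demo   # "Paused", non-zero exit
    $ minikube status --format='{{.Kubelet}}' -p demo     # "Stopped", non-zero exit
    $ minikube unpause -p demo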

TestStartStop/group/newest-cni/serial/FirstStart (35.59s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-890104 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-890104 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (35.585864887s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.59s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-890104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (11.84s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-890104 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-890104 --alsologtostderr -v=3: (11.840821887s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.84s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-890104 -n newest-cni-890104
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-890104 -n newest-cni-890104: exit status 7 (87.524517ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-890104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (26.52s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-890104 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0213 23:44:09.007711   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/addons-913502/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-890104 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (26.197516075s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-890104 -n newest-cni-890104
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.52s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2hgk5" [1bfd1dd0-2e15-4028-bdd0-34255e8bc878] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003344227s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-890104 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-890104 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-890104 -n newest-cni-890104
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-890104 -n newest-cni-890104: exit status 2 (310.603823ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-890104 -n newest-cni-890104
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-890104 -n newest-cni-890104: exit status 2 (301.497914ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-890104 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-890104 -n newest-cni-890104
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-890104 -n newest-cni-890104
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.77s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2hgk5" [1bfd1dd0-2e15-4028-bdd0-34255e8bc878] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003747927s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-488781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestNetworkPlugins/group/auto/Start (74.12s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m14.117942214s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-488781 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/old-k8s-version/serial/Pause (3.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-488781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-488781 --alsologtostderr -v=1: (1.241267082s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488781 -n old-k8s-version-488781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488781 -n old-k8s-version-488781: exit status 2 (471.843326ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-488781 -n old-k8s-version-488781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-488781 -n old-k8s-version-488781: exit status 2 (371.37447ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-488781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-488781 -n old-k8s-version-488781
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-488781 -n old-k8s-version-488781
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.75s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4kbgn" [cbf2927f-fbc5-413e-ba7b-0a94e7c7c9bc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4kbgn" [cbf2927f-fbc5-413e-ba7b-0a94e7c7c9bc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004875201s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestNetworkPlugins/group/kindnet/Start (68.91s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m8.913394669s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.91s)
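
The Start tests in this group differ only in the --cni flag they pass. A minimal sketch of the same invocation plus a quick follow-up check that the CNI daemonset came up (app=kindnet is the same selector the ControllerPod test polls later):

	# Start a profile with the kindnet CNI on the crio runtime, then list the
	# kindnet daemonset pods.
	minikube start -p kindnet-125132 --memory=3072 --cni=kindnet --driver=docker --container-runtime=crio
	kubectl --context kindnet-125132 -n kube-system get pods -l app=kindnet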

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4kbgn" [cbf2927f-fbc5-413e-ba7b-0a94e7c7c9bc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005403309s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-530478 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-530478 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)
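
The image check parses the JSON from "image list" and reports anything outside the expected Kubernetes image set, as seen above. A sketch of inspecting the same output by hand, assuming jq is installed and that the entries carry a repoTags field (minikube's current JSON schema):

	# Print every image tag present in the profile; extra tags are what the test
	# reports as a "non-minikube image".
	minikube -p embed-certs-530478 image list --format=json | jq -r '.[].repoTags[]'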

TestStartStop/group/embed-certs/serial/Pause (3.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-530478 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-530478 --alsologtostderr -v=1: (1.001437278s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-530478 -n embed-certs-530478
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-530478 -n embed-certs-530478: exit status 2 (368.18224ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-530478 -n embed-certs-530478
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-530478 -n embed-certs-530478: exit status 2 (359.51666ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-530478 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-530478 -n embed-certs-530478
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-530478 -n embed-certs-530478
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.54s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zhpfq" [2b0e7bb8-aa43-4ffc-b807-38047ac917ea] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zhpfq" [2b0e7bb8-aa43-4ffc-b807-38047ac917ea] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.047913913s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.05s)

TestNetworkPlugins/group/calico/Start (64.44s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.443246749s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.44s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zhpfq" [2b0e7bb8-aa43-4ffc-b807-38047ac917ea] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005069366s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-290912 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-290912 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-290912 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912: exit status 2 (331.32001ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912: exit status 2 (349.974578ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-290912 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-290912 -n default-k8s-diff-port-290912
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.18s)
E0213 23:46:45.386595   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/no-preload-281187/client.crt: no such file or directory
E0213 23:46:47.241170   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:47.246540   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:47.256885   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:47.277232   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:47.317590   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:47.398308   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:47.558911   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:47.879674   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:47.947641   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/no-preload-281187/client.crt: no such file or directory
E0213 23:46:48.520800   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:49.801737   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:52.362078   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:46:53.068673   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/no-preload-281187/client.crt: no such file or directory
E0213 23:46:57.482651   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:47:03.309075   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/no-preload-281187/client.crt: no such file or directory
E0213 23:47:07.722939   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
E0213 23:47:23.789481   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/no-preload-281187/client.crt: no such file or directory
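
The E-level cert_rotation lines above come from client-go's certificate-rotation watcher, which keeps retrying to reload client certificates for profiles (no-preload-281187, old-k8s-version-488781) that earlier tests had already deleted; they are background noise rather than test failures. A quick sketch for confirming the profiles are gone:

	# Deleted profiles no longer appear in the list, and their client.crt paths
	# under .minikube/profiles/ are removed with them.
	minikube profile list
	ls /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/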

TestNetworkPlugins/group/custom-flannel/Start (55.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0213 23:45:39.224106   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/functional-879196/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.267333677s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.27s)
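
Unlike the other Start variants, this run passes a manifest path instead of a built-in CNI name: minikube's --cni flag accepts auto, bridge, calico, cilium, flannel, kindnet, or a path to a CNI manifest. A sketch:

	# Apply a custom flannel manifest from the test's data directory at start time.
	minikube start -p custom-flannel-125132 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio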

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-125132 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)
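
The KubeletFlags checks simply ssh into the node and dump the kubelet command line; pgrep -a prints the PID plus the full argument list, which the test inspects for runtime- and CNI-related flags. Equivalent by hand:

	# Show the running kubelet and its full flag set
	# (e.g. --container-runtime-endpoint for crio).
	minikube ssh -p auto-125132 "pgrep -a kubelet"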

TestNetworkPlugins/group/auto/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-125132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tmf9t" [ea582bbd-9ba0-466f-974b-5f38646227e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tmf9t" [ea582bbd-9ba0-466f-974b-5f38646227e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003423474s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-65pjn" [beda9b3d-891b-4ece-9a71-8eb46b0534b6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004754686s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-125132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-125132 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
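
Taken together, the DNS, Localhost, and HairPin checks for this group probe the netcat deployment from inside its own pod: service DNS resolution, a loopback dial, and a hairpin dial back through the pod's own Service. The three probes, runnable as-is against the profile:

	# 1) Cluster DNS: resolves via the pod's resolv.conf search path.
	kubectl --context auto-125132 exec deployment/netcat -- nslookup kubernetes.default
	# 2) Localhost: the container dials its own port over loopback.
	kubectl --context auto-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# 3) Hairpin: the pod dials itself through the "netcat" Service; nc -z only
	#    probes for a listener, so exit status 0 means hairpin NAT works.
	kubectl --context auto-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"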

TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-125132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8stv8" [2d34a872-debd-4d58-a3a6-25b57ccdfd58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8stv8" [2d34a872-debd-4d58-a3a6-25b57ccdfd58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003789191s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5h6bh" [a64573f1-e277-4a45-8af2-5270ebf6190b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005935664s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-125132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-125132 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-125132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jdl69" [b4faa612-88fd-4718-87ad-f0c974190e54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jdl69" [b4faa612-88fd-4718-87ad-f0c974190e54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004296906s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-125132 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-125132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k5kvn" [aea3703f-32d3-4e74-9ea3-0c47f327fb63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k5kvn" [aea3703f-32d3-4e74-9ea3-0c47f327fb63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004139749s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

TestNetworkPlugins/group/enable-default-cni/Start (83.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m23.049990424s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.05s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-125132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-125132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (60.55s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.551506165s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.55s)

TestNetworkPlugins/group/bridge/Start (76.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-125132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.240551007s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.24s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7v7n5" [c8e101ca-1876-42e7-8d1b-694a6916cd16] Running
E0213 23:47:28.203120   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/old-k8s-version-488781/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0041044s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
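
The ControllerPod checks poll for the CNI daemonset pods by label (app=flannel in the kube-flannel namespace here). Outside the harness, kubectl wait expresses the same condition in one line; a sketch:

	# Block until the flannel daemonset pods report Ready, up to the same
	# 10-minute ceiling the test uses.
	kubectl --context flannel-125132 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m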

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-125132 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-125132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9bvcj" [eb3fc504-f512-4285-84c4-3e90df13f1eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9bvcj" [eb3fc504-f512-4285-84c4-3e90df13f1eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003674176s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-125132 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-125132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z4rk8" [63da05fe-f49a-44bd-8dc8-cccbe1cf9380] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z4rk8" [63da05fe-f49a-44bd-8dc8-cccbe1cf9380] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004349138s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.17s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-125132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-125132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-125132 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (25.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-125132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5rsxn" [ca531203-255e-4cca-a9a0-793717b1485c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5rsxn" [ca531203-255e-4cca-a9a0-793717b1485c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 25.004498237s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (25.23s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-125132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-125132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

Test skip (27/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-118079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-118079
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (5.35s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-125132 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-125132

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-125132

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-125132

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-125132

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-125132

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-125132

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-125132

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-125132

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-125132

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-125132

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /etc/hosts:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /etc/resolv.conf:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-125132

>>> host: crictl pods:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: crictl containers:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> k8s: describe netcat deployment:
error: context "kubenet-125132" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-125132" does not exist

>>> k8s: netcat logs:
error: context "kubenet-125132" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-125132" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-125132" does not exist

>>> k8s: coredns logs:
error: context "kubenet-125132" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-125132" does not exist

>>> k8s: api server logs:
error: context "kubenet-125132" does not exist

>>> host: /etc/cni:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: ip a s:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: ip r s:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: iptables-save:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: iptables table nat:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-125132" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-125132" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-125132" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: kubelet daemon config:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> k8s: kubelet logs:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 23:32:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-475933
contexts:
- context:
    cluster: kubernetes-upgrade-475933
    user: kubernetes-upgrade-475933
  name: kubernetes-upgrade-475933
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-475933
  user:
    client-certificate: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/kubernetes-upgrade-475933/client.crt
    client-key: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/kubernetes-upgrade-475933/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-125132

>>> host: docker daemon status:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: docker daemon config:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: docker system info:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: cri-docker daemon status:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: cri-docker daemon config:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: cri-dockerd version:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: containerd daemon status:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: containerd daemon config:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: containerd config dump:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: crio daemon status:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: crio daemon config:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: /etc/crio:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

>>> host: crio config:
* Profile "kubenet-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-125132"

----------------------- debugLogs end: kubenet-125132 [took: 5.148190346s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-125132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-125132
--- SKIP: TestNetworkPlugins/group/kubenet (5.35s)

TestNetworkPlugins/group/cilium (4.09s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0213 23:32:49.247054   73453 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/ingress-addon-legacy-660356/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: cilium-125132 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-125132

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-125132

>>> host: /etc/nsswitch.conf:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /etc/hosts:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /etc/resolv.conf:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-125132

>>> host: crictl pods:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: crictl containers:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> k8s: describe netcat deployment:
error: context "cilium-125132" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-125132" does not exist

>>> k8s: netcat logs:
error: context "cilium-125132" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-125132" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-125132" does not exist

>>> k8s: coredns logs:
error: context "cilium-125132" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-125132" does not exist

>>> k8s: api server logs:
error: context "cilium-125132" does not exist

>>> host: /etc/cni:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: ip a s:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: ip r s:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: iptables-save:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: iptables table nat:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-125132

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-125132

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-125132" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-125132" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-125132

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-125132

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-125132" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-125132" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-125132" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-125132" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-125132" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: kubelet daemon config:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> k8s: kubelet logs:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18169-66678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 23:32:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-475933
contexts:
- context:
    cluster: kubernetes-upgrade-475933
    user: kubernetes-upgrade-475933
  name: kubernetes-upgrade-475933
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-475933
  user:
    client-certificate: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/kubernetes-upgrade-475933/client.crt
    client-key: /home/jenkins/minikube-integration/18169-66678/.minikube/profiles/kubernetes-upgrade-475933/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-125132

>>> host: docker daemon status:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: docker daemon config:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: docker system info:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: cri-docker daemon status:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: cri-docker daemon config:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: cri-dockerd version:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: containerd daemon status:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: containerd daemon config:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: containerd config dump:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: crio daemon status:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: crio daemon config:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: /etc/crio:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

>>> host: crio config:
* Profile "cilium-125132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-125132"

----------------------- debugLogs end: cilium-125132 [took: 3.918999121s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-125132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-125132
--- SKIP: TestNetworkPlugins/group/cilium (4.09s)