Test Report: Docker_Linux_crio 19531

cca1ca437c91fbc205ce13fbbdef95295053f0ce:2024-08-29:35997

Tests failed (4/328)

| Order | Failed test                                              | Duration (s) |
|-------|----------------------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                             | 73           |
| 34    | TestAddons/parallel/Ingress                              | 149.83       |
| 36    | TestAddons/parallel/MetricsServer                        | 326.2        |
| 129   | TestFunctional/parallel/ImageCommands/ImageLoadFromFile  | 4.63         |
TestAddons/parallel/Registry (73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.933744ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-srp9d" [a6e6445c-947b-4527-a5b7-e1710ec0b292] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002585252s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-56c89" [c9c1a8d7-92a0-458c-a4fa-4271bfd8f736] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003857217s
addons_test.go:342: (dbg) Run:  kubectl --context addons-970414 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-970414 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-970414 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.077417355s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-970414 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 ip
2024/08/29 18:17:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-970414
helpers_test.go:235: (dbg) docker inspect addons-970414:

-- stdout --
	[
	    {
	        "Id": "41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f",
	        "Created": "2024-08-29T18:05:50.989469594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34227,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-29T18:05:51.114817177Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:33319d96a2f78fe466b6d8cbd88671515fca2b1eded3ce0b5f6d545b670a78ac",
	        "ResolvConfPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/hosts",
	        "LogPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f-json.log",
	        "Name": "/addons-970414",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-970414:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-970414",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef-init/diff:/var/lib/docker/overlay2/05fc462985fa2f024c01de3a02bf0ead4c06c5840250f2e5986b9e50a75da4c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-970414",
	                "Source": "/var/lib/docker/volumes/addons-970414/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-970414",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-970414",
	                "name.minikube.sigs.k8s.io": "addons-970414",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "978d127d7df61acbbd8935def9a64eff58519190d009a49d3457d2ba97b12a1f",
	            "SandboxKey": "/var/run/docker/netns/978d127d7df6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-970414": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c2cbcee4e25a4578dadcd50e3b7deda46b3aa188961837c3614b63db18a2f3b7",
	                    "EndpointID": "4a8075a86adc8f2be9df3038096489cf43023ca173ac09f522f3ebac0bd13872",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-970414",
	                        "41a3cf6921c1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-970414 -n addons-970414
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-970414 logs -n 25: (1.310184951s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-236186   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-236186              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-236186              | download-only-236186   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only              | download-only-125708   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-125708              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-125708              | download-only-125708   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-236186              | download-only-236186   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-125708              | download-only-125708   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                   | download-docker-806390 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | download-docker-806390               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-806390            | download-docker-806390 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                   | binary-mirror-708315   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | binary-mirror-708315                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45431               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-708315              | binary-mirror-708315   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| addons  | enable dashboard -p                  | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | addons-970414                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | addons-970414                        |                        |         |         |                     |                     |
	| start   | -p addons-970414 --wait=true         | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:08 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | addons-970414                        |                        |         |         |                     |                     |
	| ssh     | addons-970414 ssh curl -s            | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| addons  | addons-970414 addons                 | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-970414 addons                 | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable         | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-970414 ip                     | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	| addons  | addons-970414 addons disable         | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:27.001060   33471 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:27.001195   33471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:27.001206   33471 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:27.001213   33471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:27.001566   33471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:05:27.002146   33471 out.go:352] Setting JSON to false
	I0829 18:05:27.002926   33471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6478,"bootTime":1724948249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:27.002981   33471 start.go:139] virtualization: kvm guest
	I0829 18:05:27.004975   33471 out.go:177] * [addons-970414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:05:27.006205   33471 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:05:27.006225   33471 notify.go:220] Checking for updates...
	I0829 18:05:27.008297   33471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:27.009428   33471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:05:27.010459   33471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	I0829 18:05:27.011630   33471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:05:27.012666   33471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:05:27.013855   33471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:27.034066   33471 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:27.034178   33471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:27.081939   33471 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:05:27.073820971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:27.082037   33471 docker.go:307] overlay module found
	I0829 18:05:27.083769   33471 out.go:177] * Using the docker driver based on user configuration
	I0829 18:05:27.084831   33471 start.go:297] selected driver: docker
	I0829 18:05:27.084843   33471 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:27.084856   33471 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:05:27.085566   33471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:27.128935   33471 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:05:27.120299564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:27.129150   33471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:27.129407   33471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:05:27.130954   33471 out.go:177] * Using Docker driver with root privileges
	I0829 18:05:27.132457   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:05:27.132474   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:05:27.132483   33471 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 18:05:27.132551   33471 start.go:340] cluster config:
	{Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:27.134145   33471 out.go:177] * Starting "addons-970414" primary control-plane node in "addons-970414" cluster
	I0829 18:05:27.135511   33471 cache.go:121] Beginning downloading kic base image for docker with crio
	I0829 18:05:27.137027   33471 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:05:27.138262   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:27.138302   33471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:05:27.138309   33471 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:27.138353   33471 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:05:27.138388   33471 preload.go:172] Found /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:05:27.138398   33471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:05:27.138727   33471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json ...
	I0829 18:05:27.138747   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json: {Name:mke2d7298c74312a04e88e452c7a2b0ef6f2c5fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:27.153622   33471 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:27.153732   33471 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:05:27.153749   33471 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0829 18:05:27.153754   33471 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0829 18:05:27.153762   33471 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0829 18:05:27.153769   33471 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0829 18:05:38.808665   33471 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0829 18:05:38.808699   33471 cache.go:194] Successfully downloaded all kic artifacts
	I0829 18:05:38.808727   33471 start.go:360] acquireMachinesLock for addons-970414: {Name:mkb69a163e0d8e2549bad474fa195b7110791498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:38.808834   33471 start.go:364] duration metric: took 89.086µs to acquireMachinesLock for "addons-970414"
	I0829 18:05:38.808859   33471 start.go:93] Provisioning new machine with config: &{Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:05:38.808941   33471 start.go:125] createHost starting for "" (driver="docker")
	I0829 18:05:38.810903   33471 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0829 18:05:38.811159   33471 start.go:159] libmachine.API.Create for "addons-970414" (driver="docker")
	I0829 18:05:38.811196   33471 client.go:168] LocalClient.Create starting
	I0829 18:05:38.811308   33471 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem
	I0829 18:05:38.888624   33471 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem
	I0829 18:05:39.225744   33471 cli_runner.go:164] Run: docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0829 18:05:39.242445   33471 cli_runner.go:211] docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0829 18:05:39.242507   33471 network_create.go:284] running [docker network inspect addons-970414] to gather additional debugging logs...
	I0829 18:05:39.242525   33471 cli_runner.go:164] Run: docker network inspect addons-970414
	W0829 18:05:39.257100   33471 cli_runner.go:211] docker network inspect addons-970414 returned with exit code 1
	I0829 18:05:39.257130   33471 network_create.go:287] error running [docker network inspect addons-970414]: docker network inspect addons-970414: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-970414 not found
	I0829 18:05:39.257147   33471 network_create.go:289] output of [docker network inspect addons-970414]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-970414 not found
	
	** /stderr **
	I0829 18:05:39.257238   33471 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:05:39.272618   33471 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a7c8d0}
	I0829 18:05:39.272664   33471 network_create.go:124] attempt to create docker network addons-970414 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0829 18:05:39.272707   33471 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-970414 addons-970414
	I0829 18:05:39.331357   33471 network_create.go:108] docker network addons-970414 192.168.49.0/24 created
	I0829 18:05:39.331388   33471 kic.go:121] calculated static IP "192.168.49.2" for the "addons-970414" container
	I0829 18:05:39.331435   33471 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0829 18:05:39.346156   33471 cli_runner.go:164] Run: docker volume create addons-970414 --label name.minikube.sigs.k8s.io=addons-970414 --label created_by.minikube.sigs.k8s.io=true
	I0829 18:05:39.361798   33471 oci.go:103] Successfully created a docker volume addons-970414
	I0829 18:05:39.361884   33471 cli_runner.go:164] Run: docker run --rm --name addons-970414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --entrypoint /usr/bin/test -v addons-970414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0829 18:05:46.571826   33471 cli_runner.go:217] Completed: docker run --rm --name addons-970414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --entrypoint /usr/bin/test -v addons-970414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (7.209903568s)
	I0829 18:05:46.571853   33471 oci.go:107] Successfully prepared a docker volume addons-970414
	I0829 18:05:46.571874   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:46.571894   33471 kic.go:194] Starting extracting preloaded images to volume ...
	I0829 18:05:46.571970   33471 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-970414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0829 18:05:50.930587   33471 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-970414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (4.358576097s)
	I0829 18:05:50.930618   33471 kic.go:203] duration metric: took 4.358721922s to extract preloaded images to volume ...
	W0829 18:05:50.930753   33471 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0829 18:05:50.930875   33471 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0829 18:05:50.975554   33471 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-970414 --name addons-970414 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-970414 --network addons-970414 --ip 192.168.49.2 --volume addons-970414:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0829 18:05:51.268886   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Running}}
	I0829 18:05:51.285523   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.304601   33471 cli_runner.go:164] Run: docker exec addons-970414 stat /var/lib/dpkg/alternatives/iptables
	I0829 18:05:51.347960   33471 oci.go:144] the created container "addons-970414" has a running status.
	I0829 18:05:51.347988   33471 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa...
	I0829 18:05:51.440365   33471 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0829 18:05:51.459363   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.476716   33471 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0829 18:05:51.476740   33471 kic_runner.go:114] Args: [docker exec --privileged addons-970414 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0829 18:05:51.517330   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.534066   33471 machine.go:93] provisionDockerMachine start ...
	I0829 18:05:51.534151   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:51.554839   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:51.555038   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:51.555054   33471 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:05:51.555753   33471 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39654->127.0.0.1:32768: read: connection reset by peer
	I0829 18:05:54.683865   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-970414
	
	I0829 18:05:54.683900   33471 ubuntu.go:169] provisioning hostname "addons-970414"
	I0829 18:05:54.683958   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:54.699445   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:54.699631   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:54.699643   33471 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-970414 && echo "addons-970414" | sudo tee /etc/hostname
	I0829 18:05:54.830897   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-970414
	
	I0829 18:05:54.830993   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:54.847116   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:54.847297   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:54.847323   33471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-970414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-970414/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-970414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:05:54.972384   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:05:54.972411   33471 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19531-25336/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-25336/.minikube}
	I0829 18:05:54.972428   33471 ubuntu.go:177] setting up certificates
	I0829 18:05:54.972440   33471 provision.go:84] configureAuth start
	I0829 18:05:54.972492   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:54.988585   33471 provision.go:143] copyHostCerts
	I0829 18:05:54.988673   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/ca.pem (1078 bytes)
	I0829 18:05:54.988829   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/cert.pem (1123 bytes)
	I0829 18:05:54.988951   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/key.pem (1679 bytes)
	I0829 18:05:54.989024   33471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem org=jenkins.addons-970414 san=[127.0.0.1 192.168.49.2 addons-970414 localhost minikube]
	I0829 18:05:55.147597   33471 provision.go:177] copyRemoteCerts
	I0829 18:05:55.147661   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:05:55.147709   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.165771   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.256506   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:05:55.276475   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:05:55.296322   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:05:55.315859   33471 provision.go:87] duration metric: took 343.406508ms to configureAuth
	I0829 18:05:55.315880   33471 ubuntu.go:193] setting minikube options for container-runtime
	I0829 18:05:55.316058   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:05:55.316165   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.332100   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:55.332269   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:55.332292   33471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:05:55.536223   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:05:55.536246   33471 machine.go:96] duration metric: took 4.002156332s to provisionDockerMachine
	I0829 18:05:55.536256   33471 client.go:171] duration metric: took 16.725048882s to LocalClient.Create
	I0829 18:05:55.536279   33471 start.go:167] duration metric: took 16.725121559s to libmachine.API.Create "addons-970414"
	I0829 18:05:55.536289   33471 start.go:293] postStartSetup for "addons-970414" (driver="docker")
	I0829 18:05:55.536302   33471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:05:55.536358   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:05:55.536404   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.552022   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.640805   33471 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:05:55.643619   33471 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0829 18:05:55.643648   33471 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0829 18:05:55.643657   33471 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0829 18:05:55.643662   33471 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0829 18:05:55.643672   33471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-25336/.minikube/addons for local assets ...
	I0829 18:05:55.643725   33471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-25336/.minikube/files for local assets ...
	I0829 18:05:55.643751   33471 start.go:296] duration metric: took 107.457009ms for postStartSetup
	I0829 18:05:55.643994   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:55.660003   33471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json ...
	I0829 18:05:55.660247   33471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:05:55.660293   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.675451   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.760973   33471 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0829 18:05:55.764592   33471 start.go:128] duration metric: took 16.955640874s to createHost
	I0829 18:05:55.764614   33471 start.go:83] releasing machines lock for "addons-970414", held for 16.955766323s
	I0829 18:05:55.764673   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:55.780103   33471 ssh_runner.go:195] Run: cat /version.json
	I0829 18:05:55.780144   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.780194   33471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:05:55.780253   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.797444   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.797887   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.953349   33471 ssh_runner.go:195] Run: systemctl --version
	I0829 18:05:55.957132   33471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:05:56.091366   33471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 18:05:56.095285   33471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:05:56.111209   33471 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0829 18:05:56.111281   33471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:05:56.134706   33471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0829 18:05:56.134730   33471 start.go:495] detecting cgroup driver to use...
	I0829 18:05:56.134763   33471 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:05:56.134812   33471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:05:56.147385   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:05:56.156613   33471 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:05:56.156666   33471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:05:56.168092   33471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:05:56.179938   33471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:05:56.252028   33471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:05:56.327750   33471 docker.go:233] disabling docker service ...
	I0829 18:05:56.327807   33471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:05:56.343956   33471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:05:56.353288   33471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:05:56.427251   33471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:05:56.508717   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:05:56.518265   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:05:56.531476   33471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:05:56.531549   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.539410   33471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:05:56.539458   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.547577   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.555487   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.563452   33471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:05:56.570823   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.578587   33471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.591295   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.599128   33471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:05:56.605733   33471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:05:56.612545   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:05:56.686246   33471 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:05:56.769888   33471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:05:56.769948   33471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:05:56.772991   33471 start.go:563] Will wait 60s for crictl version
	I0829 18:05:56.773031   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:05:56.775690   33471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:05:56.808215   33471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0829 18:05:56.808328   33471 ssh_runner.go:195] Run: crio --version
	I0829 18:05:56.840217   33471 ssh_runner.go:195] Run: crio --version
	I0829 18:05:56.872925   33471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0829 18:05:56.874122   33471 cli_runner.go:164] Run: docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:05:56.889469   33471 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0829 18:05:56.892591   33471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:05:56.901877   33471 kubeadm.go:883] updating cluster {Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:05:56.902001   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:56.902058   33471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:05:56.960945   33471 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:05:56.960966   33471 crio.go:433] Images already preloaded, skipping extraction
	I0829 18:05:56.961005   33471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:05:56.996565   33471 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:05:56.996586   33471 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:05:56.996594   33471 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0829 18:05:56.996695   33471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-970414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:05:56.996788   33471 ssh_runner.go:195] Run: crio config
	I0829 18:05:57.034951   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:05:57.034976   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:05:57.035004   33471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:05:57.035037   33471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-970414 NodeName:addons-970414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:05:57.035200   33471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-970414"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:05:57.035264   33471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:05:57.043209   33471 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:05:57.043270   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:05:57.050815   33471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0829 18:05:57.065626   33471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:05:57.080858   33471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0829 18:05:57.095282   33471 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0829 18:05:57.098211   33471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:05:57.107337   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:05:57.174389   33471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:05:57.185656   33471 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414 for IP: 192.168.49.2
	I0829 18:05:57.185680   33471 certs.go:194] generating shared ca certs ...
	I0829 18:05:57.185701   33471 certs.go:226] acquiring lock for ca certs: {Name:mk67594a2aeddd90511e83e94fdec27741c5c194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.185831   33471 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key
	I0829 18:05:57.302579   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt ...
	I0829 18:05:57.302605   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt: {Name:mk68fcaae893468c94d7a84507010792fe808d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.302749   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key ...
	I0829 18:05:57.302759   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key: {Name:mk3ae49953961c47a1211facb56e8bc731cb5d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.302828   33471 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key
	I0829 18:05:57.397161   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt ...
	I0829 18:05:57.397188   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt: {Name:mkdea41367fabcd2965e87aed60d5a189212f9be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.397327   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key ...
	I0829 18:05:57.397337   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key: {Name:mk92e8ff155ca7dda7fa018998615e51c8a854aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.397397   33471 certs.go:256] generating profile certs ...
	I0829 18:05:57.397452   33471 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key
	I0829 18:05:57.397465   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt with IP's: []
	I0829 18:05:57.456687   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt ...
	I0829 18:05:57.456714   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: {Name:mkca0def83df75bdcbf967a5612ca78646681086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.456865   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key ...
	I0829 18:05:57.456879   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key: {Name:mk7a68ec7addac3a4cb5327ed442f621166ad28c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.456954   33471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7
	I0829 18:05:57.456972   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0829 18:05:57.557157   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 ...
	I0829 18:05:57.557189   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7: {Name:mk1e987fdce57178fa8bc6d220419e4e702f2022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.557369   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7 ...
	I0829 18:05:57.557386   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7: {Name:mkcb99136185dcb54ad76bcdd5f51f3bb874c708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.557477   33471 certs.go:381] copying /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 -> /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt
	I0829 18:05:57.557565   33471 certs.go:385] copying /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7 -> /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key
	I0829 18:05:57.557628   33471 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key
	I0829 18:05:57.557653   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt with IP's: []
	I0829 18:05:57.665009   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt ...
	I0829 18:05:57.665035   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt: {Name:mka7b9add077f78b858c255a0787554628ae81a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.665204   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key ...
	I0829 18:05:57.665218   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key: {Name:mkf9f0b064442d85a7a36a00447d2e06028bbb5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.665423   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 18:05:57.665464   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:05:57.665500   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:05:57.665529   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem (1679 bytes)
	I0829 18:05:57.666108   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:05:57.687482   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 18:05:57.707435   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:05:57.727015   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0829 18:05:57.746595   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:05:57.766741   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:05:57.786768   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:05:57.806898   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:05:57.827052   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:05:57.847405   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:05:57.862668   33471 ssh_runner.go:195] Run: openssl version
	I0829 18:05:57.867441   33471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:05:57.875492   33471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.878530   33471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.878584   33471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.884877   33471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:05:57.892902   33471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:05:57.895580   33471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:05:57.895625   33471 kubeadm.go:392] StartCluster: {Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:57.895692   33471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:05:57.895727   33471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:05:57.927582   33471 cri.go:89] found id: ""
	I0829 18:05:57.927651   33471 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:05:57.935503   33471 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:05:57.943410   33471 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0829 18:05:57.943456   33471 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:05:57.950627   33471 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:05:57.950644   33471 kubeadm.go:157] found existing configuration files:
	
	I0829 18:05:57.950673   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:05:57.957427   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:05:57.957467   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:05:57.964066   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:05:57.971025   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:05:57.971075   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:05:57.977703   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:05:57.984450   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:05:57.984488   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:05:57.991201   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:05:57.998415   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:05:57.998451   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:05:58.005349   33471 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0829 18:05:58.038494   33471 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:05:58.038555   33471 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:05:58.053584   33471 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0829 18:05:58.053680   33471 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-gcp
	I0829 18:05:58.053730   33471 kubeadm.go:310] OS: Linux
	I0829 18:05:58.053800   33471 kubeadm.go:310] CGROUPS_CPU: enabled
	I0829 18:05:58.053884   33471 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0829 18:05:58.053987   33471 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0829 18:05:58.054064   33471 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0829 18:05:58.054137   33471 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0829 18:05:58.054208   33471 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0829 18:05:58.054265   33471 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0829 18:05:58.054348   33471 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0829 18:05:58.054436   33471 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0829 18:05:58.098180   33471 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:05:58.098301   33471 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:05:58.098433   33471 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:05:58.103771   33471 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:05:58.106952   33471 out.go:235]   - Generating certificates and keys ...
	I0829 18:05:58.107046   33471 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:05:58.107111   33471 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:05:58.350564   33471 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:05:58.490294   33471 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:05:58.689041   33471 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:05:58.823978   33471 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:05:58.996208   33471 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:05:58.996351   33471 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-970414 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:05:59.072936   33471 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:05:59.073085   33471 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-970414 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:05:59.434980   33471 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:05:59.665647   33471 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:05:59.738102   33471 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:05:59.738192   33471 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:05:59.867228   33471 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:00.066025   33471 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:00.133026   33471 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:00.270509   33471 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:00.374793   33471 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:00.375247   33471 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:00.377672   33471 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:00.379594   33471 out.go:235]   - Booting up control plane ...
	I0829 18:06:00.379700   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:00.379784   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:00.379861   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:00.387817   33471 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:00.392895   33471 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:00.392953   33471 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:00.472796   33471 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:00.472952   33471 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:00.974304   33471 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.649814ms
	I0829 18:06:00.974388   33471 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:05.476183   33471 kubeadm.go:310] [api-check] The API server is healthy after 4.501825265s
	I0829 18:06:05.486362   33471 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:05.496924   33471 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:05.512283   33471 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:05.512547   33471 kubeadm.go:310] [mark-control-plane] Marking the node addons-970414 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:05.518748   33471 kubeadm.go:310] [bootstrap-token] Using token: jzv7iv.d89b87p5nvbumrzo
	I0829 18:06:05.520189   33471 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:05.520291   33471 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:05.522825   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:05.527262   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:05.530214   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:05.532304   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:05.534332   33471 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:05.883610   33471 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:06.302786   33471 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:06.881022   33471 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:06.881690   33471 kubeadm.go:310] 
	I0829 18:06:06.881760   33471 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:06.881773   33471 kubeadm.go:310] 
	I0829 18:06:06.881882   33471 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:06.881912   33471 kubeadm.go:310] 
	I0829 18:06:06.881972   33471 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:06.882062   33471 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:06.882212   33471 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:06.882230   33471 kubeadm.go:310] 
	I0829 18:06:06.882324   33471 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:06.882338   33471 kubeadm.go:310] 
	I0829 18:06:06.882403   33471 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:06.882413   33471 kubeadm.go:310] 
	I0829 18:06:06.882485   33471 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:06.882586   33471 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:06.882657   33471 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:06.882663   33471 kubeadm.go:310] 
	I0829 18:06:06.882741   33471 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:06.882807   33471 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:06.882813   33471 kubeadm.go:310] 
	I0829 18:06:06.882918   33471 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jzv7iv.d89b87p5nvbumrzo \
	I0829 18:06:06.883051   33471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ded35ef35e12d5a5396aa817ddf8ddaebf53b89969d35d052dfa46966e0eb6d3 \
	I0829 18:06:06.883081   33471 kubeadm.go:310] 	--control-plane 
	I0829 18:06:06.883091   33471 kubeadm.go:310] 
	I0829 18:06:06.883194   33471 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:06.883202   33471 kubeadm.go:310] 
	I0829 18:06:06.883319   33471 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jzv7iv.d89b87p5nvbumrzo \
	I0829 18:06:06.883476   33471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ded35ef35e12d5a5396aa817ddf8ddaebf53b89969d35d052dfa46966e0eb6d3 
	I0829 18:06:06.885210   33471 kubeadm.go:310] W0829 18:05:58.036060    1290 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:06.885484   33471 kubeadm.go:310] W0829 18:05:58.036646    1290 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:06.885706   33471 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-gcp\n", err: exit status 1
	I0829 18:06:06.885836   33471 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:06.885860   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:06:06.885869   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:06:06.887826   33471 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0829 18:06:06.888997   33471 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0829 18:06:06.892550   33471 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0829 18:06:06.892565   33471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0829 18:06:06.908633   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0829 18:06:07.090336   33471 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:07.090410   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:07.090410   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-970414 minikube.k8s.io/updated_at=2024_08_29T18_06_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-970414 minikube.k8s.io/primary=true
	I0829 18:06:07.097357   33471 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:07.161653   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:07.662656   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:08.162155   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:08.662485   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:09.161763   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:09.662365   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:10.162060   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:10.662667   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:11.161738   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:11.225686   33471 kubeadm.go:1113] duration metric: took 4.135333724s to wait for elevateKubeSystemPrivileges
	I0829 18:06:11.225730   33471 kubeadm.go:394] duration metric: took 13.330107637s to StartCluster
	I0829 18:06:11.225753   33471 settings.go:142] acquiring lock: {Name:mk30ad9b0ff80001a546f289c6cc726b4c74119c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:11.225898   33471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:06:11.226419   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/kubeconfig: {Name:mk79bdfdd62fbbebbe9b38ab62c3c3cce586ee25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:11.226636   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:11.226662   33471 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:11.226708   33471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:11.226817   33471 addons.go:69] Setting yakd=true in profile "addons-970414"
	I0829 18:06:11.226855   33471 addons.go:69] Setting inspektor-gadget=true in profile "addons-970414"
	I0829 18:06:11.226879   33471 addons.go:69] Setting metrics-server=true in profile "addons-970414"
	I0829 18:06:11.226895   33471 addons.go:234] Setting addon metrics-server=true in "addons-970414"
	I0829 18:06:11.226899   33471 addons.go:234] Setting addon inspektor-gadget=true in "addons-970414"
	I0829 18:06:11.226924   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226936   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226947   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:11.227018   33471 addons.go:69] Setting storage-provisioner=true in profile "addons-970414"
	I0829 18:06:11.227040   33471 addons.go:234] Setting addon storage-provisioner=true in "addons-970414"
	I0829 18:06:11.227065   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227153   33471 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-970414"
	I0829 18:06:11.227185   33471 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-970414"
	I0829 18:06:11.227245   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227436   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227450   33471 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-970414"
	I0829 18:06:11.227475   33471 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-970414"
	I0829 18:06:11.227599   33471 addons.go:69] Setting volcano=true in profile "addons-970414"
	I0829 18:06:11.227602   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227615   33471 addons.go:69] Setting registry=true in profile "addons-970414"
	I0829 18:06:11.227633   33471 addons.go:234] Setting addon volcano=true in "addons-970414"
	I0829 18:06:11.227658   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227660   33471 addons.go:234] Setting addon registry=true in "addons-970414"
	I0829 18:06:11.227676   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227689   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227696   33471 addons.go:69] Setting volumesnapshots=true in profile "addons-970414"
	I0829 18:06:11.227718   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227722   33471 addons.go:234] Setting addon volumesnapshots=true in "addons-970414"
	I0829 18:06:11.227771   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.228076   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228080   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228209   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228356   33471 addons.go:69] Setting gcp-auth=true in profile "addons-970414"
	I0829 18:06:11.228388   33471 mustload.go:65] Loading cluster: addons-970414
	I0829 18:06:11.228584   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:11.228856   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.229427   33471 addons.go:69] Setting ingress=true in profile "addons-970414"
	I0829 18:06:11.229880   33471 addons.go:234] Setting addon ingress=true in "addons-970414"
	I0829 18:06:11.230054   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226869   33471 addons.go:234] Setting addon yakd=true in "addons-970414"
	I0829 18:06:11.232989   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.233478   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227436   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.230761   33471 addons.go:69] Setting ingress-dns=true in profile "addons-970414"
	I0829 18:06:11.234357   33471 addons.go:234] Setting addon ingress-dns=true in "addons-970414"
	I0829 18:06:11.230771   33471 addons.go:69] Setting helm-tiller=true in profile "addons-970414"
	I0829 18:06:11.234426   33471 addons.go:234] Setting addon helm-tiller=true in "addons-970414"
	I0829 18:06:11.234428   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.234448   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.230778   33471 addons.go:69] Setting default-storageclass=true in profile "addons-970414"
	I0829 18:06:11.234865   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.234865   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.234897   33471 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-970414"
	I0829 18:06:11.235176   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.235691   33471 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:11.230855   33471 addons.go:69] Setting cloud-spanner=true in profile "addons-970414"
	I0829 18:06:11.230860   33471 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-970414"
	I0829 18:06:11.236330   33471 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-970414"
	I0829 18:06:11.236358   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.232013   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.236617   33471 addons.go:234] Setting addon cloud-spanner=true in "addons-970414"
	I0829 18:06:11.236656   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.238585   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:11.273627   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	W0829 18:06:11.273734   33471 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:06:11.274066   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.279122   33471 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:11.280278   33471 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:11.280382   33471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:11.280402   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:11.280450   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.280843   33471 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-970414"
	I0829 18:06:11.280884   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.281352   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.282826   33471 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:11.284222   33471 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:11.284250   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:11.284308   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.284471   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.287508   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:11.291534   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:11.291568   33471 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:11.291622   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.293330   33471 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:11.295302   33471 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:11.295320   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:11.295376   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.299261   33471 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:11.300709   33471 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:11.300725   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:11.300791   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.300909   33471 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:11.302087   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:11.302105   33471 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:11.302160   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.307761   33471 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:11.309677   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:11.309700   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:11.309766   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.320621   33471 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:11.325003   33471 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:11.325029   33471 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:11.325160   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.326386   33471 addons.go:234] Setting addon default-storageclass=true in "addons-970414"
	I0829 18:06:11.326435   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.326941   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.339593   33471 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:11.339663   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:11.342553   33471 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:11.342633   33471 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:11.344001   33471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:11.344018   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:11.344070   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.344221   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:11.344232   33471 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:11.344271   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.344391   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:11.346102   33471 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:11.347855   33471 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:11.348296   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:11.348371   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.350422   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:11.351792   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:11.354381   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:11.355688   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:11.357044   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:11.358332   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:11.360150   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:11.362855   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:11.364094   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:11.364346   33471 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:11.364366   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:11.364422   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.366038   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:11.366057   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:11.366122   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.368590   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.368828   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.377128   33471 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:11.377144   33471 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:11.377195   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.382256   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.392162   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.401881   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411536   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411725   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411872   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.412557   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.413514   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.414653   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.415906   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.417956   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.421100   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	W0829 18:06:11.447767   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.447799   33471 retry.go:31] will retry after 276.757001ms: ssh: handshake failed: EOF
	W0829 18:06:11.449293   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.449316   33471 retry.go:31] will retry after 138.739567ms: ssh: handshake failed: EOF
	I0829 18:06:11.457483   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:11.569695   33471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0829 18:06:11.646095   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.646126   33471 retry.go:31] will retry after 425.215295ms: ssh: handshake failed: EOF
	I0829 18:06:11.667860   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:11.667890   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:11.765345   33471 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:11.765373   33471 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:11.848126   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:11.848497   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:11.848514   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:11.859073   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:11.859100   33471 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:11.863017   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:11.864173   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:11.948210   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:11.948298   33471 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:11.948267   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:11.948345   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:11.948424   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:11.951036   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:11.951054   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:11.955551   33471 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:11.955617   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:11.965568   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:11.967321   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:11.967346   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:12.047508   33471 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:12.047545   33471 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:12.060080   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:12.060105   33471 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:12.145272   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:12.145358   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:12.153120   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:12.153146   33471 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:12.167673   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:12.256341   33471 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:12.256372   33471 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:12.346507   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:12.346537   33471 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:12.351483   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:12.355630   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:12.358674   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:12.358700   33471 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:12.464885   33471 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.007354776s)
	I0829 18:06:12.464974   33471 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:12.465902   33471 node_ready.go:35] waiting up to 6m0s for node "addons-970414" to be "Ready" ...
	I0829 18:06:12.554150   33471 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:12.554184   33471 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:12.564392   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:12.564475   33471 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:12.647807   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:12.647836   33471 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:12.651834   33471 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:12.651871   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:12.659639   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:12.659667   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:12.850643   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:12.850731   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:12.954879   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:13.046953   33471 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:13.046981   33471 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:13.050318   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:13.061740   33471 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-970414" context rescaled to 1 replicas
	I0829 18:06:13.161545   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:13.161570   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:13.352888   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:13.359173   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:13.359202   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:13.368369   33471 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:13.368396   33471 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:13.446352   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:13.658489   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:13.658522   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:13.863922   33471 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:13.863951   33471 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:14.153008   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:14.153084   33471 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:06:14.265801   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:14.265888   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:06:14.346440   33471 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:14.346546   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:14.457711   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:14.467018   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:14.467092   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:06:14.664740   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:15.054818   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:15.054890   33471 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:06:15.449637   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:15.751232   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.903064634s)
	I0829 18:06:15.751343   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.888297403s)
	I0829 18:06:16.167149   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.302940806s)
	I0829 18:06:16.167480   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.219108729s)
	I0829 18:06:16.167583   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.201985375s)
	I0829 18:06:16.167666   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.999962951s)
	I0829 18:06:16.167708   33471 addons.go:475] Verifying addon registry=true in "addons-970414"
	I0829 18:06:16.167991   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.81647825s)
	I0829 18:06:16.168188   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.812528468s)
	I0829 18:06:16.169994   33471 out.go:177] * Verifying registry addon...
	I0829 18:06:16.172294   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:06:16.355174   33471 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:06:16.355543   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0829 18:06:16.453902   33471 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0829 18:06:16.760111   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:17.052900   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:17.348900   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:17.746877   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.247659   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.568953   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:06:18.569108   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:18.586232   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:18.748308   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.768683   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.813695623s)
	W0829 18:06:18.768747   33471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:18.768797   33471 retry.go:31] will retry after 129.631111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:18.768934   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.718574104s)
	I0829 18:06:18.768956   33471 addons.go:475] Verifying addon metrics-server=true in "addons-970414"
	I0829 18:06:18.769122   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.416207866s)
	I0829 18:06:18.769138   33471 addons.go:475] Verifying addon ingress=true in "addons-970414"
	I0829 18:06:18.769584   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.323191353s)
	I0829 18:06:18.769666   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.311841863s)
	I0829 18:06:18.772109   33471 out.go:177] * Verifying ingress addon...
	I0829 18:06:18.772111   33471 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-970414 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:06:18.774901   33471 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:06:18.784226   33471 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:06:18.784247   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:18.864874   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:06:18.881665   33471 addons.go:234] Setting addon gcp-auth=true in "addons-970414"
	I0829 18:06:18.881720   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:18.882075   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:18.899292   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:18.901129   33471 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:06:18.901171   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:18.920489   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:19.177567   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:19.286115   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:19.554486   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:19.749842   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:19.848969   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.399253398s)
	I0829 18:06:19.849250   33471 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-970414"
	I0829 18:06:19.851236   33471 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:06:19.854515   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:06:19.869263   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:19.870161   33471 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:06:19.870184   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:20.176202   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:20.279572   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:20.357721   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:20.675473   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:20.778813   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:20.857772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.176058   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:21.279045   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:21.357952   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.676019   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:21.778347   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:21.854733   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.955399804s)
	I0829 18:06:21.854798   33471 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.953652114s)
	I0829 18:06:21.857145   33471 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:06:21.857500   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.859828   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:21.861259   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:06:21.861280   33471 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:06:21.879467   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:06:21.879489   33471 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:06:21.895886   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:21.895909   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:06:21.954214   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:21.969114   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:22.176618   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:22.279610   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:22.358000   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:22.553735   33471 addons.go:475] Verifying addon gcp-auth=true in "addons-970414"
	I0829 18:06:22.555569   33471 out.go:177] * Verifying gcp-auth addon...
	I0829 18:06:22.558244   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:06:22.560579   33471 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:06:22.560596   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:22.674902   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:22.778700   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:22.858118   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:23.061602   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:23.175002   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:23.278641   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:23.358524   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:23.561370   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:23.675585   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:23.778306   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:23.857475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:24.061119   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:24.175442   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:24.278372   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:24.357538   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:24.468405   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:24.562284   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:24.676070   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:24.778626   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:24.857813   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:25.061006   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:25.175734   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:25.278745   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:25.357486   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:25.562423   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:25.675499   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:25.778380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:25.857418   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:26.061541   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:26.174800   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:26.278575   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:26.357690   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:26.468882   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:26.561126   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:26.675797   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:26.778490   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:26.857597   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:27.061577   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:27.174998   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:27.278808   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:27.357899   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:27.561262   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:27.675554   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:27.778294   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:27.857440   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.061639   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:28.175012   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:28.278856   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:28.358354   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.560629   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:28.674835   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:28.778609   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:28.857628   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.968635   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:29.060906   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:29.175160   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:29.279076   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:29.358063   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:29.561636   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:29.674927   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:29.779426   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:29.863822   33471 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:06:29.863848   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:29.969260   33471 node_ready.go:49] node "addons-970414" has status "Ready":"True"
	I0829 18:06:29.969289   33471 node_ready.go:38] duration metric: took 17.50332165s for node "addons-970414" to be "Ready" ...
	I0829 18:06:29.969301   33471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:29.977908   33471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:30.061963   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:30.176070   33471 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:06:30.176093   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:30.279944   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:30.381182   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:30.561917   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:30.675717   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:30.779380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:30.858908   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.061158   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:31.175903   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:31.278733   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:31.360013   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.483381   33471 pod_ready.go:93] pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.483402   33471 pod_ready.go:82] duration metric: took 1.505470075s for pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.483421   33471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.487161   33471 pod_ready.go:93] pod "etcd-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.487178   33471 pod_ready.go:82] duration metric: took 3.750939ms for pod "etcd-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.487191   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.490614   33471 pod_ready.go:93] pod "kube-apiserver-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.490632   33471 pod_ready.go:82] duration metric: took 3.434179ms for pod "kube-apiserver-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.490640   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.493931   33471 pod_ready.go:93] pod "kube-controller-manager-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.493950   33471 pod_ready.go:82] duration metric: took 3.301077ms for pod "kube-controller-manager-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.493962   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwgq4" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.561772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:31.569942   33471 pod_ready.go:93] pod "kube-proxy-mwgq4" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.569964   33471 pod_ready.go:82] duration metric: took 75.994271ms for pod "kube-proxy-mwgq4" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.569973   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.676604   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:31.779535   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:31.859414   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.970319   33471 pod_ready.go:93] pod "kube-scheduler-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.970345   33471 pod_ready.go:82] duration metric: took 400.364012ms for pod "kube-scheduler-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.970358   33471 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:32.062142   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:32.175938   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:32.279359   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:32.358320   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:32.562203   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:32.675175   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:32.779380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:32.858562   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.061806   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:33.175414   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:33.278190   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:33.359497   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.566753   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:33.679816   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:33.780038   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:33.859085   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.976545   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:34.061607   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:34.175533   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:34.278647   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:34.358847   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:34.562865   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:34.676116   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:34.778980   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:34.859383   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.061690   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:35.175979   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:35.278700   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:35.358987   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.561990   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:35.676053   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:35.778889   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:35.859309   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.978326   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:36.061789   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:36.175701   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:36.278911   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:36.358733   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:36.561288   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:36.675973   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:36.778702   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:36.859052   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:37.062147   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:37.175953   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:37.278732   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:37.358897   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:37.562562   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:37.677246   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:37.779993   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:37.858836   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:38.061840   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:38.175582   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:38.279853   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:38.358730   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:38.475807   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:38.562000   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:38.675376   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:38.779020   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:38.858866   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:39.061799   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:39.175516   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:39.278386   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:39.358349   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:39.561877   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:39.675407   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:39.778631   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:39.858049   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:40.061166   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:40.175901   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:40.279026   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:40.361677   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:40.476589   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:40.562707   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:40.677196   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:40.778687   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:40.858582   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:41.062646   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:41.179136   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.278942   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:41.359243   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:41.561503   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:41.676508   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.779738   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:41.859475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.062106   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:42.176135   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.279258   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:42.358777   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.562048   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:42.675925   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.779048   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:42.879772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.975713   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:43.061010   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:43.175551   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.279093   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:43.358897   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:43.562475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:43.675599   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.778529   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:43.858277   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.062101   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:44.176457   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.279344   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:44.357937   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.562224   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:44.676679   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.779034   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:44.858759   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.976061   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:45.061405   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:45.176561   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.278694   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:45.358550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:45.562365   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:45.675919   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.778988   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:45.858884   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.061118   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:46.175480   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.278388   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:46.358500   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.561876   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:46.676217   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.779623   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:46.858934   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.976665   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:47.062438   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:47.176856   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.279274   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:47.360207   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:47.562049   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:47.676310   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.847611   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:47.860403   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.061438   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:48.176542   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.279914   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:48.358708   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.561468   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:48.676103   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.779307   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:48.858934   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.062411   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:49.175774   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.279108   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.358770   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.475745   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:49.561498   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:49.676506   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.779122   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.859246   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.061522   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:50.184207   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.285183   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:50.359392   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.563222   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:50.676338   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.779289   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:50.859315   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.063561   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:51.175786   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.278876   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:51.359522   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.477135   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:51.561730   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:51.675433   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.779706   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:51.858484   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.061448   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:52.176160   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.279349   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:52.380355   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.561333   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:52.675905   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.778605   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:52.858471   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.061429   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:53.176294   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.279494   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:53.358900   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.561935   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:53.675675   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.780447   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:53.858317   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.975085   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:54.061527   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:54.176015   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.278916   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:54.358728   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:54.561195   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:54.676074   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.778888   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:54.858526   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.061961   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:55.175994   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.278912   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:55.358696   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.562439   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:55.676087   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.779100   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:55.858417   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.975459   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:56.060830   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:56.175297   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.279178   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:56.358860   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:56.561356   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:56.676270   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.779497   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:56.859993   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.062783   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:57.254123   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.348605   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.359917   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.561267   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:57.748519   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.849949   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.859389   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.049715   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:58.061798   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:58.176534   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.348936   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:58.359169   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.561969   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:58.676240   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.779911   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:58.858659   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.062278   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:59.176444   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.279797   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:59.359146   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.561362   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:59.676652   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.778887   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:59.859071   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.061841   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:00.176029   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.278919   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:00.359145   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.476358   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:00.562430   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:00.676749   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.778262   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:00.859251   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.061470   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:01.176363   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.279417   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.361332   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.562496   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:01.676178   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.779058   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.859261   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.061640   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:02.175950   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.279315   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:02.359088   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.476615   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:02.561997   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:02.675860   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.778891   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:02.859381   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.061658   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:03.175437   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.279450   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:03.380178   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.561274   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:03.676141   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.778914   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:03.858550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.061119   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:04.175986   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.279413   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.358524   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.476911   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:04.561419   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:04.676126   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.779641   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.859408   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.061403   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:05.176552   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.278788   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.358106   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.561720   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:05.677343   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.779750   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.858550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.061549   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:06.176475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.279830   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.358299   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.561385   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:06.676305   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.779396   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.858256   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.976151   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:07.062281   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.176114   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.279243   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.359098   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.561770   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.675691   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.778345   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.858383   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.062024   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.175973   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.278626   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.359845   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.562272   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.676299   33471 kapi.go:107] duration metric: took 52.503998136s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:08.779614   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.858729   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.061667   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.278948   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.358825   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.475603   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:09.561133   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.803043   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.869349   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.061639   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.279248   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.358623   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.561862   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.779245   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.858210   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.062082   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.279187   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.380296   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.476169   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:11.562124   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.780090   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.859518   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.061848   33471 kapi.go:107] duration metric: took 49.50360321s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:07:12.064235   33471 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-970414 cluster.
	I0829 18:07:12.065845   33471 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:07:12.067312   33471 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
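	The gcp-auth hint above (opting a pod out of credential mounting via the `gcp-auth-skip-secret` label) can be illustrated with a minimal pod manifest; the pod name and image here are hypothetical, only the label key comes from the log message:

```yaml
# Hypothetical pod spec: the gcp-auth-skip-secret label tells the
# minikube gcp-auth addon not to mount GCP credentials into this pod.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds            # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: nginx                # hypothetical image
```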
	I0829 18:07:12.279829   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.380390   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.781279   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.858496   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.279475   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.357989   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.476371   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:13.778868   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.859012   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.278985   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.358948   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.778506   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.858490   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.279669   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.358327   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.778714   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.859145   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.975951   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:16.279416   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.358353   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.778955   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.879919   33471 kapi.go:107] duration metric: took 57.025400666s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:07:17.278506   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.779629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.279662   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.475735   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:18.778865   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.279629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.778833   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.279629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.476070   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:20.779310   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.278746   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.778588   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.279091   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.778744   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.975809   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:23.279672   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.778698   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.279136   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.779600   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.975845   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:25.279527   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.778694   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.279166   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.778678   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.976229   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:27.279572   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.779925   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.278543   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.778902   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.279513   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.475862   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:29.778825   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.278410   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.779205   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.278785   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.778310   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.975687   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:32.279208   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.778950   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.278632   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.778869   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.975755   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:34.279008   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.849062   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.279182   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.849707   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.047727   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:36.348740   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.779662   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.279104   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.779192   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.279217   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.476596   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:38.778967   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.279557   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.778520   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.279154   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.781434   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.976165   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:41.300505   33471 kapi.go:107] duration metric: took 1m22.525606095s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:07:41.302197   33471 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, helm-tiller, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0829 18:07:41.303840   33471 addons.go:510] duration metric: took 1m30.077118852s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin helm-tiller storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0829 18:07:43.475312   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:45.475559   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:47.975734   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:50.475293   33471 pod_ready.go:93] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:50.475315   33471 pod_ready.go:82] duration metric: took 1m18.504950495s for pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.475325   33471 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.479409   33471 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:50.479430   33471 pod_ready.go:82] duration metric: took 4.09992ms for pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.479449   33471 pod_ready.go:39] duration metric: took 1m20.510134495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:50.479465   33471 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:07:50.479496   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:50.479553   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:50.512656   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:50.512676   33471 cri.go:89] found id: ""
	I0829 18:07:50.512684   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:50.512723   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.515973   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:50.516034   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:50.548643   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:50.548662   33471 cri.go:89] found id: ""
	I0829 18:07:50.548669   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:50.548718   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.551901   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:50.551963   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:50.583669   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:50.583702   33471 cri.go:89] found id: ""
	I0829 18:07:50.583709   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:50.583748   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.586859   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:50.586933   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:50.618860   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:50.618883   33471 cri.go:89] found id: ""
	I0829 18:07:50.618890   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:50.618930   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.622032   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:50.622084   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:50.653704   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:50.653729   33471 cri.go:89] found id: ""
	I0829 18:07:50.653740   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:50.653792   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.657019   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:50.657077   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:50.690012   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:50.690036   33471 cri.go:89] found id: ""
	I0829 18:07:50.690045   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:50.690086   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.693191   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:50.693236   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:50.726118   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:50.726139   33471 cri.go:89] found id: ""
	I0829 18:07:50.726149   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:50.726190   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.729505   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:50.729526   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:50.767861   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:50.767892   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:50.779540   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:50.779567   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:50.822562   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:50.822592   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:50.872590   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:50.872628   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:50.904925   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:50.904951   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:50.960999   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:50.961033   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:50.993169   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:50.993195   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:51.072501   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:51.072533   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:51.113527   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:51.113556   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:51.183067   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:51.183100   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:51.281419   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:51.281446   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:53.816429   33471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:07:53.829736   33471 api_server.go:72] duration metric: took 1m42.603041834s to wait for apiserver process to appear ...
	I0829 18:07:53.829767   33471 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:07:53.829801   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:53.829844   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:53.862325   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:53.862351   33471 cri.go:89] found id: ""
	I0829 18:07:53.862361   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:53.862409   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.865569   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:53.865646   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:53.898226   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:53.898247   33471 cri.go:89] found id: ""
	I0829 18:07:53.898255   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:53.898296   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.901566   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:53.901628   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:53.934199   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:53.934218   33471 cri.go:89] found id: ""
	I0829 18:07:53.934225   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:53.934265   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.937354   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:53.937402   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:53.970450   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:53.970472   33471 cri.go:89] found id: ""
	I0829 18:07:53.970479   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:53.970524   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.973830   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:53.973887   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:54.006146   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:54.006169   33471 cri.go:89] found id: ""
	I0829 18:07:54.006177   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:54.006224   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.009454   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:54.009512   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:54.041172   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:54.041191   33471 cri.go:89] found id: ""
	I0829 18:07:54.041198   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:54.041249   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.044312   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:54.044368   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:54.083976   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:54.084001   33471 cri.go:89] found id: ""
	I0829 18:07:54.084009   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:54.084049   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.087300   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:54.087324   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:54.098754   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:54.098782   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:54.161684   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:54.161716   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:54.241049   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:54.241085   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:54.273621   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:54.273646   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:54.331096   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:54.331132   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:54.363448   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:54.363477   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:54.431857   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:54.431896   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:54.528063   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:54.528089   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:54.577648   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:54.577681   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:54.611916   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:54.611946   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:54.647955   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:54.647983   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:57.189075   33471 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0829 18:07:57.192542   33471 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0829 18:07:57.193379   33471 api_server.go:141] control plane version: v1.31.0
	I0829 18:07:57.193402   33471 api_server.go:131] duration metric: took 3.363628924s to wait for apiserver health ...
	I0829 18:07:57.193411   33471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:07:57.193432   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:57.193471   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:57.225819   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:57.225841   33471 cri.go:89] found id: ""
	I0829 18:07:57.225850   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:57.225896   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.228901   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:57.228944   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:57.260637   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:57.260656   33471 cri.go:89] found id: ""
	I0829 18:07:57.260663   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:57.260704   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.263753   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:57.263801   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:57.294974   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:57.294997   33471 cri.go:89] found id: ""
	I0829 18:07:57.295006   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:57.295058   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.298097   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:57.298155   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:57.329667   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:57.329690   33471 cri.go:89] found id: ""
	I0829 18:07:57.329698   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:57.329749   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.332928   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:57.332984   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:57.364944   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:57.364962   33471 cri.go:89] found id: ""
	I0829 18:07:57.364970   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:57.365005   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.368114   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:57.368166   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:57.401257   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:57.401276   33471 cri.go:89] found id: ""
	I0829 18:07:57.401283   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:57.401332   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.404460   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:57.404506   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:57.435578   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:57.435600   33471 cri.go:89] found id: ""
	I0829 18:07:57.435607   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:57.435647   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.438689   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:57.438711   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:57.493400   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:57.493428   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:57.565541   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:57.565577   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:57.635720   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:57.635750   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:57.647194   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:57.647217   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:57.689192   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:57.689228   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:57.738329   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:57.738357   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:57.771675   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:57.771698   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:57.802656   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:57.802684   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:57.842425   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:57.842451   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:57.937146   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:57.937174   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:57.974724   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:57.974752   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:08:00.516381   33471 system_pods.go:59] 19 kube-system pods found
	I0829 18:08:00.516420   33471 system_pods.go:61] "coredns-6f6b679f8f-jxrb9" [99ffdce3-4a2f-4216-95ca-28db164333a2] Running
	I0829 18:08:00.516426   33471 system_pods.go:61] "csi-hostpath-attacher-0" [b33c21ec-bc06-47b0-b7b4-78c5392d31f7] Running
	I0829 18:08:00.516431   33471 system_pods.go:61] "csi-hostpath-resizer-0" [ae955038-1da8-4d77-a461-9dccfe623922] Running
	I0829 18:08:00.516437   33471 system_pods.go:61] "csi-hostpathplugin-5wlj7" [c7f02d44-110a-4971-b90a-521977151630] Running
	I0829 18:08:00.516442   33471 system_pods.go:61] "etcd-addons-970414" [8daf5c22-02d4-44e0-8a5c-0d5b9c0cd7b5] Running
	I0829 18:08:00.516447   33471 system_pods.go:61] "kindnet-95zg6" [612be856-b5ad-4571-9908-168f86f5b273] Running
	I0829 18:08:00.516452   33471 system_pods.go:61] "kube-apiserver-addons-970414" [549d4f3b-086e-40f7-9b7a-513220af52cd] Running
	I0829 18:08:00.516457   33471 system_pods.go:61] "kube-controller-manager-addons-970414" [00d3410f-773e-471f-9716-7fc678c6f5a3] Running
	I0829 18:08:00.516466   33471 system_pods.go:61] "kube-ingress-dns-minikube" [6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1] Running
	I0829 18:08:00.516471   33471 system_pods.go:61] "kube-proxy-mwgq4" [39ef4c84-6d42-40f2-9eb2-af13d2c9a233] Running
	I0829 18:08:00.516479   33471 system_pods.go:61] "kube-scheduler-addons-970414" [75453275-6d16-4fc0-944d-d30987bfccb2] Running
	I0829 18:08:00.516485   33471 system_pods.go:61] "metrics-server-8988944d9-jss9n" [a866f6c5-ff40-4062-986b-ddae9310879c] Running
	I0829 18:08:00.516490   33471 system_pods.go:61] "nvidia-device-plugin-daemonset-njmrn" [5c975a82-28c1-431d-b4e4-b89312486f53] Running
	I0829 18:08:00.516497   33471 system_pods.go:61] "registry-6fb4cdfc84-srp9d" [a6e6445c-947b-4527-a5b7-e1710ec0b292] Running
	I0829 18:08:00.516500   33471 system_pods.go:61] "registry-proxy-56c89" [c9c1a8d7-92a0-458c-a4fa-4271bfd8f736] Running
	I0829 18:08:00.516506   33471 system_pods.go:61] "snapshot-controller-56fcc65765-c9pzh" [b3e9483b-e20c-4b8d-b5b4-53940d1f7621] Running
	I0829 18:08:00.516509   33471 system_pods.go:61] "snapshot-controller-56fcc65765-w7vbq" [0a038557-f899-4971-87c0-4a476ae40ff9] Running
	I0829 18:08:00.516513   33471 system_pods.go:61] "storage-provisioner" [7cffe50e-abe7-4d9c-9c04-88e86ad1ffb9] Running
	I0829 18:08:00.516516   33471 system_pods.go:61] "tiller-deploy-b48cc5f79-h8shr" [53f4571a-d63e-4721-aa85-b44922772189] Running
	I0829 18:08:00.516522   33471 system_pods.go:74] duration metric: took 3.32310726s to wait for pod list to return data ...
	I0829 18:08:00.516531   33471 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:08:00.518762   33471 default_sa.go:45] found service account: "default"
	I0829 18:08:00.518781   33471 default_sa.go:55] duration metric: took 2.241797ms for default service account to be created ...
	I0829 18:08:00.518789   33471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:08:00.527444   33471 system_pods.go:86] 19 kube-system pods found
	I0829 18:08:00.527470   33471 system_pods.go:89] "coredns-6f6b679f8f-jxrb9" [99ffdce3-4a2f-4216-95ca-28db164333a2] Running
	I0829 18:08:00.527475   33471 system_pods.go:89] "csi-hostpath-attacher-0" [b33c21ec-bc06-47b0-b7b4-78c5392d31f7] Running
	I0829 18:08:00.527479   33471 system_pods.go:89] "csi-hostpath-resizer-0" [ae955038-1da8-4d77-a461-9dccfe623922] Running
	I0829 18:08:00.527483   33471 system_pods.go:89] "csi-hostpathplugin-5wlj7" [c7f02d44-110a-4971-b90a-521977151630] Running
	I0829 18:08:00.527486   33471 system_pods.go:89] "etcd-addons-970414" [8daf5c22-02d4-44e0-8a5c-0d5b9c0cd7b5] Running
	I0829 18:08:00.527490   33471 system_pods.go:89] "kindnet-95zg6" [612be856-b5ad-4571-9908-168f86f5b273] Running
	I0829 18:08:00.527493   33471 system_pods.go:89] "kube-apiserver-addons-970414" [549d4f3b-086e-40f7-9b7a-513220af52cd] Running
	I0829 18:08:00.527496   33471 system_pods.go:89] "kube-controller-manager-addons-970414" [00d3410f-773e-471f-9716-7fc678c6f5a3] Running
	I0829 18:08:00.527500   33471 system_pods.go:89] "kube-ingress-dns-minikube" [6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1] Running
	I0829 18:08:00.527503   33471 system_pods.go:89] "kube-proxy-mwgq4" [39ef4c84-6d42-40f2-9eb2-af13d2c9a233] Running
	I0829 18:08:00.527507   33471 system_pods.go:89] "kube-scheduler-addons-970414" [75453275-6d16-4fc0-944d-d30987bfccb2] Running
	I0829 18:08:00.527510   33471 system_pods.go:89] "metrics-server-8988944d9-jss9n" [a866f6c5-ff40-4062-986b-ddae9310879c] Running
	I0829 18:08:00.527514   33471 system_pods.go:89] "nvidia-device-plugin-daemonset-njmrn" [5c975a82-28c1-431d-b4e4-b89312486f53] Running
	I0829 18:08:00.527520   33471 system_pods.go:89] "registry-6fb4cdfc84-srp9d" [a6e6445c-947b-4527-a5b7-e1710ec0b292] Running
	I0829 18:08:00.527523   33471 system_pods.go:89] "registry-proxy-56c89" [c9c1a8d7-92a0-458c-a4fa-4271bfd8f736] Running
	I0829 18:08:00.527526   33471 system_pods.go:89] "snapshot-controller-56fcc65765-c9pzh" [b3e9483b-e20c-4b8d-b5b4-53940d1f7621] Running
	I0829 18:08:00.527532   33471 system_pods.go:89] "snapshot-controller-56fcc65765-w7vbq" [0a038557-f899-4971-87c0-4a476ae40ff9] Running
	I0829 18:08:00.527535   33471 system_pods.go:89] "storage-provisioner" [7cffe50e-abe7-4d9c-9c04-88e86ad1ffb9] Running
	I0829 18:08:00.527538   33471 system_pods.go:89] "tiller-deploy-b48cc5f79-h8shr" [53f4571a-d63e-4721-aa85-b44922772189] Running
	I0829 18:08:00.527546   33471 system_pods.go:126] duration metric: took 8.752911ms to wait for k8s-apps to be running ...
	I0829 18:08:00.527554   33471 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:08:00.527594   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:08:00.539104   33471 system_svc.go:56] duration metric: took 11.540627ms WaitForService to wait for kubelet
	I0829 18:08:00.539136   33471 kubeadm.go:582] duration metric: took 1m49.312445201s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:08:00.539157   33471 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:08:00.542184   33471 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0829 18:08:00.542215   33471 node_conditions.go:123] node cpu capacity is 8
	I0829 18:08:00.542232   33471 node_conditions.go:105] duration metric: took 3.069703ms to run NodePressure ...
	I0829 18:08:00.542247   33471 start.go:241] waiting for startup goroutines ...
	I0829 18:08:00.542258   33471 start.go:246] waiting for cluster config update ...
	I0829 18:08:00.542277   33471 start.go:255] writing updated cluster config ...
	I0829 18:08:00.542602   33471 ssh_runner.go:195] Run: rm -f paused
	I0829 18:08:00.589612   33471 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:08:00.591791   33471 out.go:177] * Done! kubectl is now configured to use "addons-970414" cluster and "default" namespace by default
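The startup trace above repeatedly polls cluster state (pod list, default service account, kubelet service) until each condition holds, then records a `duration metric` for the wait. A minimal sketch of that poll-until-timeout pattern — hypothetical helper, not minikube's actual implementation — could look like:

```python
import time

def wait_for(check, timeout=360.0, interval=3.0):
    """Poll `check()` until it returns True or `timeout` seconds elapse.

    Mirrors the wait pattern in the log: re-query state on an interval,
    and on success return the elapsed time (the "duration metric").
    """
    start = time.monotonic()
    deadline = start + timeout
    while time.monotonic() < deadline:
        if check():
            return time.monotonic() - start
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

In the log, each `waiting for ...` line corresponds to one such loop (e.g. "waiting for k8s-apps to be running" took 8.752911ms because the pod list already satisfied the check on the first poll).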
	
	
	==> CRI-O <==
	Aug 29 18:17:12 addons-970414 crio[1030]: time="2024-08-29 18:17:12.755807089Z" level=info msg="Started container" PID=10421 containerID=d37df5ee9d5b45fdb07648001e6a2d8070fc79faabafabc52e962fa59be59c0e description=default/test-local-path/busybox id=721f7a42-7fe7-43eb-8fc6-79e2d78b2759 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ccdeeb08026913a0b25a10d073aec8ed75ad7f8813f8ec5f2c433b30d95d7ced
	Aug 29 18:17:13 addons-970414 crio[1030]: time="2024-08-29 18:17:13.492109561Z" level=info msg="Stopping pod sandbox: cdd467414a75a606cebd5077af52c37a1d65906dc6c896271dbbe6b3a090db56" id=0b671cc2-2b72-4175-a4f9-887e0c7b8d3f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:17:13 addons-970414 crio[1030]: time="2024-08-29 18:17:13.492420505Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:cdd467414a75a606cebd5077af52c37a1d65906dc6c896271dbbe6b3a090db56 UID:1c3ccc50-1be9-4974-90c2-0eae5cbdb69d NetNS:/var/run/netns/0639df08-8425-4be0-a4fc-5d02f8b12a83 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 29 18:17:13 addons-970414 crio[1030]: time="2024-08-29 18:17:13.492588965Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Aug 29 18:17:13 addons-970414 crio[1030]: time="2024-08-29 18:17:13.527133684Z" level=info msg="Stopped pod sandbox: cdd467414a75a606cebd5077af52c37a1d65906dc6c896271dbbe6b3a090db56" id=0b671cc2-2b72-4175-a4f9-887e0c7b8d3f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.075721627Z" level=info msg="Stopping container: 7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5 (timeout: 30s)" id=382456ff-f64d-46d0-9cd9-713385e451d7 name=/runtime.v1.RuntimeService/StopContainer
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.081302027Z" level=info msg="Stopping container: bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483 (timeout: 30s)" id=0d907fa3-30e8-4b27-96b0-3102800eb92a name=/runtime.v1.RuntimeService/StopContainer
	Aug 29 18:17:14 addons-970414 conmon[3989]: conmon 7143d09d061dd20f3faf <ninfo>: container 4001 exited with status 2
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.146429685Z" level=info msg="Stopping pod sandbox: ccdeeb08026913a0b25a10d073aec8ed75ad7f8813f8ec5f2c433b30d95d7ced" id=c30d4ae5-5d96-4222-8809-65909911acd1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.146740094Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:ccdeeb08026913a0b25a10d073aec8ed75ad7f8813f8ec5f2c433b30d95d7ced UID:7bf4aad5-fbc6-491c-b7ab-f932d727e5b0 NetNS:/var/run/netns/c346e873-f450-406e-9240-e4323f04ef77 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.146897895Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.190451307Z" level=info msg="Stopped pod sandbox: ccdeeb08026913a0b25a10d073aec8ed75ad7f8813f8ec5f2c433b30d95d7ced" id=c30d4ae5-5d96-4222-8809-65909911acd1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.219342964Z" level=info msg="Stopped container 7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5: kube-system/registry-6fb4cdfc84-srp9d/registry" id=382456ff-f64d-46d0-9cd9-713385e451d7 name=/runtime.v1.RuntimeService/StopContainer
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.219953182Z" level=info msg="Stopping pod sandbox: 770da276006956ec27e0463d129e37a01a644d52909b6fd652f30ceadcfb09cd" id=668c76a2-0e97-442a-b82e-d1dbed63b54a name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.220238375Z" level=info msg="Got pod network &{Name:registry-6fb4cdfc84-srp9d Namespace:kube-system ID:770da276006956ec27e0463d129e37a01a644d52909b6fd652f30ceadcfb09cd UID:a6e6445c-947b-4527-a5b7-e1710ec0b292 NetNS:/var/run/netns/b0ae98b8-d46e-42d1-a98e-e779751b773c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.220400283Z" level=info msg="Deleting pod kube-system_registry-6fb4cdfc84-srp9d from CNI network \"kindnet\" (type=ptp)"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.223608170Z" level=info msg="Stopped container bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483: kube-system/registry-proxy-56c89/registry-proxy" id=0d907fa3-30e8-4b27-96b0-3102800eb92a name=/runtime.v1.RuntimeService/StopContainer
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.224088518Z" level=info msg="Stopping pod sandbox: 84b7e0f85d866506588bd8b5506b65d6b523e9bc231c0b7a6aa1ba48a9047052" id=a9078239-5d77-4901-b447-e880c394497a name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.248291853Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-GFSZUBCFILYZJZYB - [0:0]\n:KUBE-HP-TF6246TSRTSPPYQG - [0:0]\n:KUBE-HP-GUBYUHQDBFJBIREF - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-cv22w_ingress-nginx_4e53ad5a-0419-423f-baf6-3ccfce3a4256_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-GFSZUBCFILYZJZYB\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-cv22w_ingress-nginx_4e53ad5a-0419-423f-baf6-3ccfce3a4256_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-GUBYUHQDBFJBIREF\n-A KUBE-HP-GFSZUBCFILYZJZYB -s 10.244.0.21/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-cv22w_ingress-nginx_4e53ad5a-0419-423f-baf6-3ccfce3a4256_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-GFSZUBCFILYZJZYB -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-cv22w_ingress-nginx_4e53ad5a-0419-423f-baf6-3ccfce3a4256_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.21:443\n-A KUBE-HP-GUBYUHQDBFJBIREF -s 10.244.0.21/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-cv22w_ingress-nginx_4e53ad5a-0419-423f-baf6-3ccfce3a4256_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-GUBYUHQDBFJBIREF -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-cv22w_ingress-nginx_4e53ad5a-0419-423f-baf6-3ccfce3a4256_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.21:80\n-X KUBE-HP-TF6246TSRTSPPYQG\nCOMMIT\n"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.250862830Z" level=info msg="Closing host port tcp:5000"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.252476142Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.252634892Z" level=info msg="Got pod network &{Name:registry-proxy-56c89 Namespace:kube-system ID:84b7e0f85d866506588bd8b5506b65d6b523e9bc231c0b7a6aa1ba48a9047052 UID:c9c1a8d7-92a0-458c-a4fa-4271bfd8f736 NetNS:/var/run/netns/e8d845ea-909f-4945-b9d5-60ba133ba3d9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.252773299Z" level=info msg="Deleting pod kube-system_registry-proxy-56c89 from CNI network \"kindnet\" (type=ptp)"
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.270031282Z" level=info msg="Stopped pod sandbox: 770da276006956ec27e0463d129e37a01a644d52909b6fd652f30ceadcfb09cd" id=668c76a2-0e97-442a-b82e-d1dbed63b54a name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:17:14 addons-970414 crio[1030]: time="2024-08-29 18:17:14.298058898Z" level=info msg="Stopped pod sandbox: 84b7e0f85d866506588bd8b5506b65d6b523e9bc231c0b7a6aa1ba48a9047052" id=a9078239-5d77-4901-b447-e880c394497a name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	d37df5ee9d5b4       docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8                            2 seconds ago       Exited              busybox                    0                   ccdeeb0802691       test-local-path
	bb98377c01a6b       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                            6 seconds ago       Exited              helper-pod                 0                   42b00c291b58c       helper-pod-create-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42
	3d021da6ca851       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                9 seconds ago       Exited              helm-test                  0                   db5a790376214       helm-test
	03f63bd4b1c48       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              58 seconds ago      Running             nginx                      0                   4fa70648299cc       nginx
	87396c3a6a26a       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                 0                   410fabb8bf1a3       ingress-nginx-controller-bc57996ff-cv22w
	a19318a738251       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             9 minutes ago       Exited              patch                      3                   763e4aa04b031       ingress-nginx-admission-patch-c8fc7
	751a953e0230f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 10 minutes ago      Running             gcp-auth                   0                   a12fb4e4da859       gcp-auth-89d5ffd79-cj6cz
	bd31b61c84a17       gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4              10 minutes ago      Exited              registry-proxy             0                   84b7e0f85d866       registry-proxy-56c89
	178bb778ee85e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago      Exited              create                     0                   cac81433fd37a       ingress-nginx-admission-create-hxp8v
	ac36f3f3323b5       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   77fab2e401483       nvidia-device-plugin-daemonset-njmrn
	56e6e68213244       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns       0                   21fd16f2bea64       kube-ingress-dns-minikube
	6888613b3e8ca       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        10 minutes ago      Running             metrics-server             0                   5c6d6ccdb7bd8       metrics-server-8988944d9-jss9n
	532cf0ac06e24       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               10 minutes ago      Running             cloud-spanner-emulator     0                   36576214189f3       cloud-spanner-emulator-769b77f747-zhn4j
	527d96a52b6f4       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              10 minutes ago      Running             yakd                       0                   495559fdabbd8       yakd-dashboard-67d98fc6b-b5hns
	83e00fe4fd127       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             10 minutes ago      Running             local-path-provisioner     0                   3d2d80cd09370       local-path-provisioner-86d989889c-lsxfx
	3a16651d14fd4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             10 minutes ago      Running             coredns                    0                   c991950d1479a       coredns-6f6b679f8f-jxrb9
	fc284d6f42abd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner        0                   1c77efb0d73c6       storage-provisioner
	fc407b261b55a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                           10 minutes ago      Running             kindnet-cni                0                   3a14aa7cbd5ba       kindnet-95zg6
	f3c75142fecd2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             11 minutes ago      Running             kube-proxy                 0                   6259dfbf37c5a       kube-proxy-mwgq4
	cb91925e81486       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             11 minutes ago      Running             kube-scheduler             0                   3af0a40f28992       kube-scheduler-addons-970414
	5034cc120442d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                       0                   989f4e8da94ea       etcd-addons-970414
	70642d5cd8ef0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             11 minutes ago      Running             kube-controller-manager    0                   740a72692bfef       kube-controller-manager-addons-970414
	b65cd62e3477a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             11 minutes ago      Running             kube-apiserver             0                   1be263bee45c2       kube-apiserver-addons-970414
	
	
	==> coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] <==
	[INFO] 10.244.0.19:41065 - 41812 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110073s
	[INFO] 10.244.0.19:33314 - 7978 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068562s
	[INFO] 10.244.0.19:33314 - 63253 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123722s
	[INFO] 10.244.0.19:33313 - 15372 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005052571s
	[INFO] 10.244.0.19:33313 - 8969 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005240532s
	[INFO] 10.244.0.19:56468 - 13948 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004476729s
	[INFO] 10.244.0.19:56468 - 34426 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005241684s
	[INFO] 10.244.0.19:36060 - 35696 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004520671s
	[INFO] 10.244.0.19:36060 - 15990 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004576926s
	[INFO] 10.244.0.19:44003 - 15478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079273s
	[INFO] 10.244.0.19:44003 - 29556 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115076s
	[INFO] 10.244.0.20:49487 - 52545 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147201s
	[INFO] 10.244.0.20:59535 - 5474 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000116485s
	[INFO] 10.244.0.20:51018 - 29008 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119345s
	[INFO] 10.244.0.20:51904 - 9903 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179576s
	[INFO] 10.244.0.20:44385 - 47503 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138771s
	[INFO] 10.244.0.20:53196 - 482 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137631s
	[INFO] 10.244.0.20:52299 - 24778 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005524264s
	[INFO] 10.244.0.20:56050 - 55826 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006091549s
	[INFO] 10.244.0.20:52775 - 61641 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004707679s
	[INFO] 10.244.0.20:52194 - 42579 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00473342s
	[INFO] 10.244.0.20:58349 - 16179 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004578594s
	[INFO] 10.244.0.20:59907 - 15287 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006119565s
	[INFO] 10.244.0.20:54560 - 33495 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00068612s
	[INFO] 10.244.0.20:50005 - 1476 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000775831s
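The runs of NXDOMAIN entries above are the resolver's search-list expansion at work: Kubernetes pods typically get `ndots:5` in `resolv.conf`, so a name with fewer than five dots (like `registry.kube-system.svc.cluster.local`, which has four) is tried with each search domain appended before the bare name is queried — which is why the final NOERROR answer comes last. A rough sketch of that expansion (an approximation of resolver behavior, not coredns code):

```python
def search_expand(name, search_domains, ndots=5):
    """Approximate glibc-style search-list expansion.

    A name with fewer than `ndots` dots is first tried with each
    configured search domain appended (the NXDOMAIN attempts in the
    coredns log), and only then queried as-is.
    """
    candidates = []
    if name.count(".") < ndots:
        candidates = [f"{name}.{domain}" for domain in search_domains]
    candidates.append(name)
    return candidates
```

With the pod's typical search list (`svc.cluster.local`, `cluster.local`, then the GCE-internal domains), this reproduces the exact query sequence logged above for the registry lookup.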
	
	
	==> describe nodes <==
	Name:               addons-970414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-970414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-970414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-970414
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-970414
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:16:48 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:16:48 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:16:48 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:16:48 +0000   Thu, 29 Aug 2024 18:06:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-970414
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 f871f2a5cd3540f79b6c200227bc35ed
	  System UUID:                49e09a6c-969e-4bfb-9562-e1e953ad9e00
	  Boot ID:                    fb799716-ba24-44f3-8d84-c852ba38aeb7
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-zhn4j                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gcp-auth                    gcp-auth-89d5ffd79-cj6cz                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-cv22w                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-jxrb9                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-addons-970414                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-95zg6                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-970414                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-970414                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mwgq4                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-970414                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-8988944d9-jss9n                                100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 nvidia-device-plugin-daemonset-njmrn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42    0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  local-path-storage          local-path-provisioner-86d989889c-lsxfx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-b5hns                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-970414 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-970414 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-970414 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-970414 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-970414 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-970414 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-970414 event: Registered Node addons-970414 in Controller
	  Normal   NodeReady                10m                kubelet          Node addons-970414 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000692] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000895] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000853] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000677] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000668] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000729] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.580338] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.044213] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005611] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013638] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002516] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013312] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.261359] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 18:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +1.032106] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +2.011848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +4.223585] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +8.191236] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[ +16.126426] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	
	
	==> etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] <==
	{"level":"warn","ts":"2024-08-29T18:06:14.860272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.829512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-29T18:06:14.862833Z","caller":"traceutil/trace.go:171","msg":"trace[1886778643] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:447; }","duration":"113.399296ms","start":"2024-08-29T18:06:14.749412Z","end":"2024-08-29T18:06:14.862811Z","steps":["trace[1886778643] 'agreement among raft nodes before linearized reading'  (duration: 110.800471ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:14.865475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.235889ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:14.865581Z","caller":"traceutil/trace.go:171","msg":"trace[783420174] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:456; }","duration":"104.355912ms","start":"2024-08-29T18:06:14.761212Z","end":"2024-08-29T18:06:14.865567Z","steps":["trace[783420174] 'agreement among raft nodes before linearized reading'  (duration: 104.199882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:14.866155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.413882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:14.866237Z","caller":"traceutil/trace.go:171","msg":"trace[751535580] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:456; }","duration":"101.515166ms","start":"2024-08-29T18:06:14.764713Z","end":"2024-08-29T18:06:14.866229Z","steps":["trace[751535580] 'agreement among raft nodes before linearized reading'  (duration: 101.396746ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.945390Z","caller":"traceutil/trace.go:171","msg":"trace[1123768633] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:521; }","duration":"176.463619ms","start":"2024-08-29T18:06:15.768910Z","end":"2024-08-29T18:06:15.945374Z","steps":["trace[1123768633] 'read index received'  (duration: 77.240649ms)","trace[1123768633] 'applied index is now lower than readState.Index'  (duration: 99.222386ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:06:15.945619Z","caller":"traceutil/trace.go:171","msg":"trace[1692666756] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"191.612828ms","start":"2024-08-29T18:06:15.753992Z","end":"2024-08-29T18:06:15.945605Z","steps":["trace[1692666756] 'process raft request'  (duration: 92.148406ms)","trace[1692666756] 'compare'  (duration: 98.998238ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:06:15.945833Z","caller":"traceutil/trace.go:171","msg":"trace[866514615] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"181.389131ms","start":"2024-08-29T18:06:15.764436Z","end":"2024-08-29T18:06:15.945825Z","steps":["trace[866514615] 'process raft request'  (duration: 180.806444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:15.946042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.150959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:15.946098Z","caller":"traceutil/trace.go:171","msg":"trace[2012632869] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:515; }","duration":"192.218501ms","start":"2024-08-29T18:06:15.753869Z","end":"2024-08-29T18:06:15.946088Z","steps":["trace[2012632869] 'agreement among raft nodes before linearized reading'  (duration: 192.106939ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946172Z","caller":"traceutil/trace.go:171","msg":"trace[142374409] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"101.01965ms","start":"2024-08-29T18:06:15.845144Z","end":"2024-08-29T18:06:15.946163Z","steps":["trace[142374409] 'process raft request'  (duration: 100.171837ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946262Z","caller":"traceutil/trace.go:171","msg":"trace[1373251369] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"101.10566ms","start":"2024-08-29T18:06:15.845146Z","end":"2024-08-29T18:06:15.946252Z","steps":["trace[1373251369] 'process raft request'  (duration: 100.19817ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946280Z","caller":"traceutil/trace.go:171","msg":"trace[650896772] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"181.671652ms","start":"2024-08-29T18:06:15.764601Z","end":"2024-08-29T18:06:15.946273Z","steps":["trace[650896772] 'process raft request'  (duration: 180.68021ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:15.947176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.060944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/local-path\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:15.947209Z","caller":"traceutil/trace.go:171","msg":"trace[2050858094] range","detail":"{range_begin:/registry/storageclasses/local-path; range_end:; response_count:0; response_revision:518; }","duration":"102.103563ms","start":"2024-08-29T18:06:15.845096Z","end":"2024-08-29T18:06:15.947200Z","steps":["trace[2050858094] 'agreement among raft nodes before linearized reading'  (duration: 101.928133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:07:09.800169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.050132ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031540939107167 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/gadget/gadget-xpbfc\" mod_revision:1165 > success:<request_put:<key:\"/registry/pods/gadget/gadget-xpbfc\" value_size:12390 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-xpbfc\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-29T18:07:09.800248Z","caller":"traceutil/trace.go:171","msg":"trace[1408882006] linearizableReadLoop","detail":"{readStateIndex:1206; appliedIndex:1205; }","duration":"133.531974ms","start":"2024-08-29T18:07:09.666705Z","end":"2024-08-29T18:07:09.800237Z","steps":["trace[1408882006] 'read index received'  (duration: 19.946533ms)","trace[1408882006] 'applied index is now lower than readState.Index'  (duration: 113.584532ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:07:09.800310Z","caller":"traceutil/trace.go:171","msg":"trace[645338846] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"199.150213ms","start":"2024-08-29T18:07:09.601149Z","end":"2024-08-29T18:07:09.800300Z","steps":["trace[645338846] 'process raft request'  (duration: 85.48217ms)","trace[645338846] 'compare'  (duration: 112.96922ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:07:09.800446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.733421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-proxy-56c89.17f0453e2283edaa\" ","response":"range_response_count:1 size:811"}
	{"level":"info","ts":"2024-08-29T18:07:09.800570Z","caller":"traceutil/trace.go:171","msg":"trace[1967698203] range","detail":"{range_begin:/registry/events/kube-system/registry-proxy-56c89.17f0453e2283edaa; range_end:; response_count:1; response_revision:1175; }","duration":"133.858756ms","start":"2024-08-29T18:07:09.666695Z","end":"2024-08-29T18:07:09.800554Z","steps":["trace[1967698203] 'agreement among raft nodes before linearized reading'  (duration: 133.655669ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:07:32.539822Z","caller":"traceutil/trace.go:171","msg":"trace[474774062] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"116.907065ms","start":"2024-08-29T18:07:32.422893Z","end":"2024-08-29T18:07:32.539801Z","steps":["trace[474774062] 'process raft request'  (duration: 116.785483ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:16:02.407524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1637}
	{"level":"info","ts":"2024-08-29T18:16:02.431918Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1637,"took":"23.970648ms","hash":2632862633,"current-db-size-bytes":6815744,"current-db-size":"6.8 MB","current-db-size-in-use-bytes":3559424,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-08-29T18:16:02.431962Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2632862633,"revision":1637,"compact-revision":-1}
	
	
	==> gcp-auth [751a953e0230f7226fd0d5854c1b2e02172545fd27536cb15928df5e0e27c66c] <==
	2024/08/29 18:07:11 GCP Auth Webhook started!
	2024/08/29 18:08:00 Ready to marshal response ...
	2024/08/29 18:08:00 Ready to write response ...
	2024/08/29 18:08:00 Ready to marshal response ...
	2024/08/29 18:08:00 Ready to write response ...
	2024/08/29 18:08:00 Ready to marshal response ...
	2024/08/29 18:08:00 Ready to write response ...
	2024/08/29 18:16:13 Ready to marshal response ...
	2024/08/29 18:16:13 Ready to write response ...
	2024/08/29 18:16:14 Ready to marshal response ...
	2024/08/29 18:16:14 Ready to write response ...
	2024/08/29 18:16:23 Ready to marshal response ...
	2024/08/29 18:16:23 Ready to write response ...
	2024/08/29 18:16:42 Ready to marshal response ...
	2024/08/29 18:16:42 Ready to write response ...
	2024/08/29 18:17:04 Ready to marshal response ...
	2024/08/29 18:17:04 Ready to write response ...
	2024/08/29 18:17:07 Ready to marshal response ...
	2024/08/29 18:17:07 Ready to write response ...
	2024/08/29 18:17:07 Ready to marshal response ...
	2024/08/29 18:17:07 Ready to write response ...
	2024/08/29 18:17:15 Ready to marshal response ...
	2024/08/29 18:17:15 Ready to write response ...
	
	
	==> kernel <==
	 18:17:15 up  1:59,  0 users,  load average: 0.35, 0.39, 0.36
	Linux addons-970414 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] <==
	I0829 18:15:09.547154       1 main.go:299] handling current node
	I0829 18:15:19.546759       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:15:19.546792       1 main.go:299] handling current node
	I0829 18:15:29.551022       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:15:29.551058       1 main.go:299] handling current node
	I0829 18:15:39.546074       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:15:39.546120       1 main.go:299] handling current node
	I0829 18:15:49.555470       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:15:49.555502       1 main.go:299] handling current node
	I0829 18:15:59.554756       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:15:59.554788       1 main.go:299] handling current node
	I0829 18:16:09.546930       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:16:09.546964       1 main.go:299] handling current node
	I0829 18:16:19.546083       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:16:19.546122       1 main.go:299] handling current node
	I0829 18:16:29.546072       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:16:29.546103       1 main.go:299] handling current node
	I0829 18:16:39.547936       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:16:39.547968       1 main.go:299] handling current node
	I0829 18:16:49.547083       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:16:49.547113       1 main.go:299] handling current node
	I0829 18:16:59.546493       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:16:59.546524       1 main.go:299] handling current node
	I0829 18:17:09.546089       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:17:09.546135       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] <==
	E0829 18:07:50.127957       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 18:07:50.129455       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.191.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.191.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.191.20:443: connect: connection refused" logger="UnhandledError"
	I0829 18:07:50.162059       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 18:16:08.739816       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:16:09.755954       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:16:14.377413       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:16:14.646684       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.164.80"}
	I0829 18:16:33.632960       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 18:16:58.558923       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.558971       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.571597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.645767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.645925       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.645986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.653426       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.653571       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.671184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.671217       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:16:59.646907       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:16:59.671973       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0829 18:16:59.768892       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0829 18:17:05.584431       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.27:50460: read: connection reset by peer
	
	
	==> kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] <==
	E0829 18:17:00.599557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:00.905377       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:00.905413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:00.982492       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:00.982525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:02.782779       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:02.782817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:02.878319       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:02.878359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:03.175357       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:03.175392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:06.031240       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:06.031278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:17:07.555847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="5.356µs"
	W0829 18:17:08.821727       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:08.821764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:08.964195       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:08.964242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:17:10.936715       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0829 18:17:10.936750       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 18:17:11.231326       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0829 18:17:11.231365       1 shared_informer.go:320] Caches are synced for garbage collector
	W0829 18:17:13.654395       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:13.654432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:17:14.066801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="6.586µs"
	
	
	==> kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] <==
	I0829 18:06:14.059690       1 server_linux.go:66] "Using iptables proxy"
	I0829 18:06:15.156032       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0829 18:06:15.158564       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:15.952517       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0829 18:06:15.952637       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:15.966318       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:15.967679       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:15.967714       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:15.969000       1 config.go:197] "Starting service config controller"
	I0829 18:06:15.969038       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:15.969060       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:15.969064       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:15.969485       1 config.go:326] "Starting node config controller"
	I0829 18:06:15.969491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:16.146707       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:06:16.150259       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:16.150276       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] <==
	W0829 18:06:03.754191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:03.755458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:06:03.755510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:03.755547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:06:03.755582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:03.755613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.755907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0829 18:06:03.755927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0829 18:06:03.755940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:03.755944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:03.755960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:03.755964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.755928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:03.756013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.756050       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:03.756071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:04.767168       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:04.767208       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:06:04.816545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:04.816611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0829 18:06:06.651649       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:17:14 addons-970414 kubelet[1626]: I0829 18:17:14.401810    1626 reconciler_common.go:288] "Volume detached for volume \"pvc-ca648e25-cf9d-4c60-9189-df073bc95d42\" (UniqueName: \"kubernetes.io/host-path/7bf4aad5-fbc6-491c-b7ab-f932d727e5b0-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42\") on node \"addons-970414\" DevicePath \"\""
	Aug 29 18:17:14 addons-970414 kubelet[1626]: I0829 18:17:14.401835    1626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rn7sf\" (UniqueName: \"kubernetes.io/projected/7bf4aad5-fbc6-491c-b7ab-f932d727e5b0-kube-api-access-rn7sf\") on node \"addons-970414\" DevicePath \"\""
	Aug 29 18:17:14 addons-970414 kubelet[1626]: I0829 18:17:14.403496    1626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c1a8d7-92a0-458c-a4fa-4271bfd8f736-kube-api-access-jfb6j" (OuterVolumeSpecName: "kube-api-access-jfb6j") pod "c9c1a8d7-92a0-458c-a4fa-4271bfd8f736" (UID: "c9c1a8d7-92a0-458c-a4fa-4271bfd8f736"). InnerVolumeSpecName "kube-api-access-jfb6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:17:14 addons-970414 kubelet[1626]: I0829 18:17:14.403608    1626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6e6445c-947b-4527-a5b7-e1710ec0b292-kube-api-access-5trbs" (OuterVolumeSpecName: "kube-api-access-5trbs") pod "a6e6445c-947b-4527-a5b7-e1710ec0b292" (UID: "a6e6445c-947b-4527-a5b7-e1710ec0b292"). InnerVolumeSpecName "kube-api-access-5trbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:17:14 addons-970414 kubelet[1626]: I0829 18:17:14.502076    1626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5trbs\" (UniqueName: \"kubernetes.io/projected/a6e6445c-947b-4527-a5b7-e1710ec0b292-kube-api-access-5trbs\") on node \"addons-970414\" DevicePath \"\""
	Aug 29 18:17:14 addons-970414 kubelet[1626]: I0829 18:17:14.502109    1626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jfb6j\" (UniqueName: \"kubernetes.io/projected/c9c1a8d7-92a0-458c-a4fa-4271bfd8f736-kube-api-access-jfb6j\") on node \"addons-970414\" DevicePath \"\""
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.149589    1626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccdeeb08026913a0b25a10d073aec8ed75ad7f8813f8ec5f2c433b30d95d7ced"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.150902    1626 scope.go:117] "RemoveContainer" containerID="7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.165552    1626 scope.go:117] "RemoveContainer" containerID="7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: E0829 18:17:15.165920    1626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5\": container with ID starting with 7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5 not found: ID does not exist" containerID="7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.165961    1626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5"} err="failed to get container status \"7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5\": rpc error: code = NotFound desc = could not find container \"7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5\": container with ID starting with 7143d09d061dd20f3faf30772f6e9a2f46a2ee2d1d6a9f850910944dc6e14fd5 not found: ID does not exist"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.165993    1626 scope.go:117] "RemoveContainer" containerID="bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.183715    1626 scope.go:117] "RemoveContainer" containerID="bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: E0829 18:17:15.184052    1626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483\": container with ID starting with bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483 not found: ID does not exist" containerID="bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.184086    1626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483"} err="failed to get container status \"bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483\": rpc error: code = NotFound desc = could not find container \"bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483\": container with ID starting with bd31b61c84a177669152c2ee7be7b01fe560a12e757f9859984b249ba30e9483 not found: ID does not exist"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: E0829 18:17:15.351228    1626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6e6445c-947b-4527-a5b7-e1710ec0b292" containerName="registry"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: E0829 18:17:15.351262    1626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bf4aad5-fbc6-491c-b7ab-f932d727e5b0" containerName="busybox"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: E0829 18:17:15.351273    1626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9c1a8d7-92a0-458c-a4fa-4271bfd8f736" containerName="registry-proxy"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.351320    1626 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c1a8d7-92a0-458c-a4fa-4271bfd8f736" containerName="registry-proxy"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.351331    1626 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6e6445c-947b-4527-a5b7-e1710ec0b292" containerName="registry"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.351339    1626 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf4aad5-fbc6-491c-b7ab-f932d727e5b0" containerName="busybox"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.409133    1626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/4fdef135-2a84-4893-bea3-3990a3b7ea83-script\") pod \"helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42\" (UID: \"4fdef135-2a84-4893-bea3-3990a3b7ea83\") " pod="local-path-storage/helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.409189    1626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/4fdef135-2a84-4893-bea3-3990a3b7ea83-data\") pod \"helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42\" (UID: \"4fdef135-2a84-4893-bea3-3990a3b7ea83\") " pod="local-path-storage/helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.409291    1626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4fdef135-2a84-4893-bea3-3990a3b7ea83-gcp-creds\") pod \"helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42\" (UID: \"4fdef135-2a84-4893-bea3-3990a3b7ea83\") " pod="local-path-storage/helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42"
	Aug 29 18:17:15 addons-970414 kubelet[1626]: I0829 18:17:15.409359    1626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brw22\" (UniqueName: \"kubernetes.io/projected/4fdef135-2a84-4893-bea3-3990a3b7ea83-kube-api-access-brw22\") pod \"helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42\" (UID: \"4fdef135-2a84-4893-bea3-3990a3b7ea83\") " pod="local-path-storage/helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42"
	
	
	==> storage-provisioner [fc284d6f42abd5ee85cea3d425a167f1747f738b8330187c43ca42227f77adb7] <==
	I0829 18:06:30.446216       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:06:30.457153       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:06:30.457203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:06:30.464533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:06:30.464681       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278!
	I0829 18:06:30.464679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f572a30f-1e05-4d7e-a66a-2b263d676001", APIVersion:"v1", ResourceVersion:"937", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278 became leader
	I0829 18:06:30.565419       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-970414 -n addons-970414
helpers_test.go:261: (dbg) Run:  kubectl --context addons-970414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-hxp8v ingress-nginx-admission-patch-c8fc7 helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-970414 describe pod busybox ingress-nginx-admission-create-hxp8v ingress-nginx-admission-patch-c8fc7 helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-970414 describe pod busybox ingress-nginx-admission-create-hxp8v ingress-nginx-admission-patch-c8fc7 helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42: exit status 1 (63.692013ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-970414/192.168.49.2
	Start Time:       Thu, 29 Aug 2024 18:08:00 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9wnnt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9wnnt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m16s                  default-scheduler  Successfully assigned default/busybox to addons-970414
	  Normal   Pulling    7m48s (x4 over 9m15s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hxp8v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-c8fc7" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-970414 describe pod busybox ingress-nginx-admission-create-hxp8v ingress-nginx-admission-patch-c8fc7 helper-pod-delete-pvc-ca648e25-cf9d-4c60-9189-df073bc95d42: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.00s)
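The failed step above runs `wget --spider -S http://registry.kube-system.svc.cluster.local` inside a busybox pod: the check passes as soon as the registry Service answers any HTTP response, without downloading a body. A minimal sketch of the same spider-style probe in Python — the local test server below is purely illustrative (the real target is the in-cluster Service, which is not reachable here):

```python
import http.server
import threading
import urllib.error
import urllib.request


def spider(url, timeout=5.0):
    """HEAD-only probe, mirroring `wget --spider`: fetch headers, no body.

    Returns True on any non-5xx HTTP response, False on connection errors.
    """
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        return exc.code < 500
    except OSError:
        return False


# Illustrative stand-in for the registry Service endpoint.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

ok = spider(f"http://127.0.0.1:{server.server_address[1]}/")
server.shutdown()
print(ok)  # → True
```

In this run the probe never got that far: the busybox image itself failed to pull (`ErrImagePull` / `ImagePullBackOff` in the events below), so the pod running the check never started.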

                                                
TestAddons/parallel/Ingress (149.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-970414 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-970414 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-970414 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [533abc7e-dca7-42c9-86cb-5cfb902fb94c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [533abc7e-dca7-42c9-86cb-5cfb902fb94c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004083423s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-970414 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.379497233s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-970414 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-970414 addons disable ingress --alsologtostderr -v=1: (7.661425827s)
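The check that timed out at addons_test.go:264 sends a plain GET to `127.0.0.1` with an explicit `Host: nginx.example.com` header, so the ingress controller routes on the virtual host rather than the IP. A sketch of that request shape with Python's `urllib` — the recording server below is a local stand-in for the ingress, and `nginx.example.com` is the host from the test's ingress manifest, not a real domain:

```python
import http.server
import threading
import urllib.request

seen_hosts = []


class Handler(http.server.BaseHTTPRequestHandler):
    """Records the Host header, as an ingress routing on virtual host would."""

    def do_GET(self):
        seen_hosts.append(self.headers.get("Host"))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass


server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Same shape as: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/",
    headers={"Host": "nginx.example.com"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    status = resp.status
server.shutdown()
print(status, seen_hosts)  # → 200 ['nginx.example.com']
```

In the failing run the request never completed: `ssh: Process exited with status 28` is curl's timeout exit code surfacing through `minikube ssh`, i.e. nothing answered on port 80 inside the node within the deadline.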
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-970414
helpers_test.go:235: (dbg) docker inspect addons-970414:

-- stdout --
	[
	    {
	        "Id": "41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f",
	        "Created": "2024-08-29T18:05:50.989469594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34227,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-29T18:05:51.114817177Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:33319d96a2f78fe466b6d8cbd88671515fca2b1eded3ce0b5f6d545b670a78ac",
	        "ResolvConfPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/hosts",
	        "LogPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f-json.log",
	        "Name": "/addons-970414",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-970414:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-970414",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef-init/diff:/var/lib/docker/overlay2/05fc462985fa2f024c01de3a02bf0ead4c06c5840250f2e5986b9e50a75da4c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-970414",
	                "Source": "/var/lib/docker/volumes/addons-970414/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-970414",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-970414",
	                "name.minikube.sigs.k8s.io": "addons-970414",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "978d127d7df61acbbd8935def9a64eff58519190d009a49d3457d2ba97b12a1f",
	            "SandboxKey": "/var/run/docker/netns/978d127d7df6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-970414": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c2cbcee4e25a4578dadcd50e3b7deda46b3aa188961837c3614b63db18a2f3b7",
	                    "EndpointID": "4a8075a86adc8f2be9df3038096489cf43023ca173ac09f522f3ebac0bd13872",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-970414",
	                        "41a3cf6921c1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-970414 -n addons-970414
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-970414 logs -n 25: (1.105499576s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-125708                                                                     | download-only-125708   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                                                                          | download-docker-806390 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | download-docker-806390                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-806390                                                                   | download-docker-806390 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-708315   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | binary-mirror-708315                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45431                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-708315                                                                     | binary-mirror-708315   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| addons  | enable dashboard -p                                                                         | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | addons-970414                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | addons-970414                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-970414 --wait=true                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | addons-970414                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-970414 ssh curl -s                                                                   | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-970414 addons                                                                        | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-970414 addons                                                                        | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-970414 ip                                                                            | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-970414 ssh cat                                                                       | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | /opt/local-path-provisioner/pvc-ca648e25-cf9d-4c60-9189-df073bc95d42_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-970414                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | addons-970414                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-970414                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-970414 ip                                                                            | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:27.001060   33471 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:27.001195   33471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:27.001206   33471 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:27.001213   33471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:27.001566   33471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:05:27.002146   33471 out.go:352] Setting JSON to false
	I0829 18:05:27.002926   33471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6478,"bootTime":1724948249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:27.002981   33471 start.go:139] virtualization: kvm guest
	I0829 18:05:27.004975   33471 out.go:177] * [addons-970414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:05:27.006205   33471 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:05:27.006225   33471 notify.go:220] Checking for updates...
	I0829 18:05:27.008297   33471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:27.009428   33471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:05:27.010459   33471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	I0829 18:05:27.011630   33471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:05:27.012666   33471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:05:27.013855   33471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:27.034066   33471 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:27.034178   33471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:27.081939   33471 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:05:27.073820971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:27.082037   33471 docker.go:307] overlay module found
	I0829 18:05:27.083769   33471 out.go:177] * Using the docker driver based on user configuration
	I0829 18:05:27.084831   33471 start.go:297] selected driver: docker
	I0829 18:05:27.084843   33471 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:27.084856   33471 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:05:27.085566   33471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:27.128935   33471 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:05:27.120299564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:27.129150   33471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:27.129407   33471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:05:27.130954   33471 out.go:177] * Using Docker driver with root privileges
	I0829 18:05:27.132457   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:05:27.132474   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:05:27.132483   33471 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 18:05:27.132551   33471 start.go:340] cluster config:
	{Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:27.134145   33471 out.go:177] * Starting "addons-970414" primary control-plane node in "addons-970414" cluster
	I0829 18:05:27.135511   33471 cache.go:121] Beginning downloading kic base image for docker with crio
	I0829 18:05:27.137027   33471 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:05:27.138262   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:27.138302   33471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:05:27.138309   33471 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:27.138353   33471 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:05:27.138388   33471 preload.go:172] Found /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:05:27.138398   33471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:05:27.138727   33471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json ...
	I0829 18:05:27.138747   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json: {Name:mke2d7298c74312a04e88e452c7a2b0ef6f2c5fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:27.153622   33471 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:27.153732   33471 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:05:27.153749   33471 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0829 18:05:27.153754   33471 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0829 18:05:27.153762   33471 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0829 18:05:27.153769   33471 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0829 18:05:38.808665   33471 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0829 18:05:38.808699   33471 cache.go:194] Successfully downloaded all kic artifacts
	I0829 18:05:38.808727   33471 start.go:360] acquireMachinesLock for addons-970414: {Name:mkb69a163e0d8e2549bad474fa195b7110791498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:38.808834   33471 start.go:364] duration metric: took 89.086µs to acquireMachinesLock for "addons-970414"
	I0829 18:05:38.808859   33471 start.go:93] Provisioning new machine with config: &{Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:05:38.808941   33471 start.go:125] createHost starting for "" (driver="docker")
	I0829 18:05:38.810903   33471 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0829 18:05:38.811159   33471 start.go:159] libmachine.API.Create for "addons-970414" (driver="docker")
	I0829 18:05:38.811196   33471 client.go:168] LocalClient.Create starting
	I0829 18:05:38.811308   33471 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem
	I0829 18:05:38.888624   33471 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem
	I0829 18:05:39.225744   33471 cli_runner.go:164] Run: docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0829 18:05:39.242445   33471 cli_runner.go:211] docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0829 18:05:39.242507   33471 network_create.go:284] running [docker network inspect addons-970414] to gather additional debugging logs...
	I0829 18:05:39.242525   33471 cli_runner.go:164] Run: docker network inspect addons-970414
	W0829 18:05:39.257100   33471 cli_runner.go:211] docker network inspect addons-970414 returned with exit code 1
	I0829 18:05:39.257130   33471 network_create.go:287] error running [docker network inspect addons-970414]: docker network inspect addons-970414: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-970414 not found
	I0829 18:05:39.257147   33471 network_create.go:289] output of [docker network inspect addons-970414]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-970414 not found
	
	** /stderr **
	I0829 18:05:39.257238   33471 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:05:39.272618   33471 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a7c8d0}
	I0829 18:05:39.272664   33471 network_create.go:124] attempt to create docker network addons-970414 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0829 18:05:39.272707   33471 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-970414 addons-970414
	I0829 18:05:39.331357   33471 network_create.go:108] docker network addons-970414 192.168.49.0/24 created
	I0829 18:05:39.331388   33471 kic.go:121] calculated static IP "192.168.49.2" for the "addons-970414" container
	I0829 18:05:39.331435   33471 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0829 18:05:39.346156   33471 cli_runner.go:164] Run: docker volume create addons-970414 --label name.minikube.sigs.k8s.io=addons-970414 --label created_by.minikube.sigs.k8s.io=true
	I0829 18:05:39.361798   33471 oci.go:103] Successfully created a docker volume addons-970414
	I0829 18:05:39.361884   33471 cli_runner.go:164] Run: docker run --rm --name addons-970414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --entrypoint /usr/bin/test -v addons-970414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0829 18:05:46.571826   33471 cli_runner.go:217] Completed: docker run --rm --name addons-970414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --entrypoint /usr/bin/test -v addons-970414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (7.209903568s)
	I0829 18:05:46.571853   33471 oci.go:107] Successfully prepared a docker volume addons-970414
	I0829 18:05:46.571874   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:46.571894   33471 kic.go:194] Starting extracting preloaded images to volume ...
	I0829 18:05:46.571970   33471 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-970414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0829 18:05:50.930587   33471 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-970414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (4.358576097s)
	I0829 18:05:50.930618   33471 kic.go:203] duration metric: took 4.358721922s to extract preloaded images to volume ...
	W0829 18:05:50.930753   33471 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0829 18:05:50.930875   33471 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0829 18:05:50.975554   33471 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-970414 --name addons-970414 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-970414 --network addons-970414 --ip 192.168.49.2 --volume addons-970414:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0829 18:05:51.268886   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Running}}
	I0829 18:05:51.285523   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.304601   33471 cli_runner.go:164] Run: docker exec addons-970414 stat /var/lib/dpkg/alternatives/iptables
	I0829 18:05:51.347960   33471 oci.go:144] the created container "addons-970414" has a running status.
	I0829 18:05:51.347988   33471 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa...
	I0829 18:05:51.440365   33471 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0829 18:05:51.459363   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.476716   33471 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0829 18:05:51.476740   33471 kic_runner.go:114] Args: [docker exec --privileged addons-970414 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0829 18:05:51.517330   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.534066   33471 machine.go:93] provisionDockerMachine start ...
	I0829 18:05:51.534151   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:51.554839   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:51.555038   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:51.555054   33471 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:05:51.555753   33471 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39654->127.0.0.1:32768: read: connection reset by peer
	I0829 18:05:54.683865   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-970414
	
	I0829 18:05:54.683900   33471 ubuntu.go:169] provisioning hostname "addons-970414"
	I0829 18:05:54.683958   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:54.699445   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:54.699631   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:54.699643   33471 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-970414 && echo "addons-970414" | sudo tee /etc/hostname
	I0829 18:05:54.830897   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-970414
	
	I0829 18:05:54.830993   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:54.847116   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:54.847297   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:54.847323   33471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-970414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-970414/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-970414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:05:54.972384   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:05:54.972411   33471 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19531-25336/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-25336/.minikube}
	I0829 18:05:54.972428   33471 ubuntu.go:177] setting up certificates
	I0829 18:05:54.972440   33471 provision.go:84] configureAuth start
	I0829 18:05:54.972492   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:54.988585   33471 provision.go:143] copyHostCerts
	I0829 18:05:54.988673   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/ca.pem (1078 bytes)
	I0829 18:05:54.988829   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/cert.pem (1123 bytes)
	I0829 18:05:54.988951   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/key.pem (1679 bytes)
	I0829 18:05:54.989024   33471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem org=jenkins.addons-970414 san=[127.0.0.1 192.168.49.2 addons-970414 localhost minikube]
	I0829 18:05:55.147597   33471 provision.go:177] copyRemoteCerts
	I0829 18:05:55.147661   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:05:55.147709   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.165771   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.256506   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:05:55.276475   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:05:55.296322   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:05:55.315859   33471 provision.go:87] duration metric: took 343.406508ms to configureAuth
	I0829 18:05:55.315880   33471 ubuntu.go:193] setting minikube options for container-runtime
	I0829 18:05:55.316058   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:05:55.316165   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.332100   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:55.332269   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:55.332292   33471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:05:55.536223   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:05:55.536246   33471 machine.go:96] duration metric: took 4.002156332s to provisionDockerMachine
	I0829 18:05:55.536256   33471 client.go:171] duration metric: took 16.725048882s to LocalClient.Create
	I0829 18:05:55.536279   33471 start.go:167] duration metric: took 16.725121559s to libmachine.API.Create "addons-970414"
	I0829 18:05:55.536289   33471 start.go:293] postStartSetup for "addons-970414" (driver="docker")
	I0829 18:05:55.536302   33471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:05:55.536358   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:05:55.536404   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.552022   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.640805   33471 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:05:55.643619   33471 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0829 18:05:55.643648   33471 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0829 18:05:55.643657   33471 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0829 18:05:55.643662   33471 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0829 18:05:55.643672   33471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-25336/.minikube/addons for local assets ...
	I0829 18:05:55.643725   33471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-25336/.minikube/files for local assets ...
	I0829 18:05:55.643751   33471 start.go:296] duration metric: took 107.457009ms for postStartSetup
	I0829 18:05:55.643994   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:55.660003   33471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json ...
	I0829 18:05:55.660247   33471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:05:55.660293   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.675451   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.760973   33471 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0829 18:05:55.764592   33471 start.go:128] duration metric: took 16.955640874s to createHost
	I0829 18:05:55.764614   33471 start.go:83] releasing machines lock for "addons-970414", held for 16.955766323s
	I0829 18:05:55.764673   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:55.780103   33471 ssh_runner.go:195] Run: cat /version.json
	I0829 18:05:55.780144   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.780194   33471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:05:55.780253   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.797444   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.797887   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.953349   33471 ssh_runner.go:195] Run: systemctl --version
	I0829 18:05:55.957132   33471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:05:56.091366   33471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 18:05:56.095285   33471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:05:56.111209   33471 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0829 18:05:56.111281   33471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:05:56.134706   33471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0829 18:05:56.134730   33471 start.go:495] detecting cgroup driver to use...
	I0829 18:05:56.134763   33471 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:05:56.134812   33471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:05:56.147385   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:05:56.156613   33471 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:05:56.156666   33471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:05:56.168092   33471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:05:56.179938   33471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:05:56.252028   33471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:05:56.327750   33471 docker.go:233] disabling docker service ...
	I0829 18:05:56.327807   33471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:05:56.343956   33471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:05:56.353288   33471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:05:56.427251   33471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:05:56.508717   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:05:56.518265   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:05:56.531476   33471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:05:56.531549   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.539410   33471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:05:56.539458   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.547577   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.555487   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.563452   33471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:05:56.570823   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.578587   33471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.591295   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.599128   33471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:05:56.605733   33471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:05:56.612545   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:05:56.686246   33471 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:05:56.769888   33471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:05:56.769948   33471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:05:56.772991   33471 start.go:563] Will wait 60s for crictl version
	I0829 18:05:56.773031   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:05:56.775690   33471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:05:56.808215   33471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0829 18:05:56.808328   33471 ssh_runner.go:195] Run: crio --version
	I0829 18:05:56.840217   33471 ssh_runner.go:195] Run: crio --version
	I0829 18:05:56.872925   33471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0829 18:05:56.874122   33471 cli_runner.go:164] Run: docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:05:56.889469   33471 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0829 18:05:56.892591   33471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:05:56.901877   33471 kubeadm.go:883] updating cluster {Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:05:56.902001   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:56.902058   33471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:05:56.960945   33471 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:05:56.960966   33471 crio.go:433] Images already preloaded, skipping extraction
	I0829 18:05:56.961005   33471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:05:56.996565   33471 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:05:56.996586   33471 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:05:56.996594   33471 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0829 18:05:56.996695   33471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-970414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
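The kubelet drop-in rendered above uses the systemd override pattern: an empty `ExecStart=` first clears the base unit's start command, then the second `ExecStart=` supplies minikube's flags. A minimal sketch of writing such a drop-in, using a temp directory instead of `/etc/systemd/system` so it runs unprivileged (paths and flags mirror the log, not real host state):

```shell
#!/bin/sh
# Sketch: render a kubelet systemd drop-in like the one logged above.
# Written to a temp directory rather than /etc/systemd/system so it is
# safe to run anywhere; this illustrates the override pattern only.
set -eu

dropin_dir=$(mktemp -d)
dropin="$dropin_dir/10-kubeadm.conf"

cat > "$dropin" <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-970414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
EOF

# Two ExecStart lines: the empty one resets the base unit's command,
# the second one installs the override.
execstart_lines=$(grep -c '^ExecStart' "$dropin")
```

After installing a real drop-in, `systemctl daemon-reload` (as the log does at 18:05:57.107337) is what makes systemd pick it up.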
	I0829 18:05:56.996788   33471 ssh_runner.go:195] Run: crio config
	I0829 18:05:57.034951   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:05:57.034976   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:05:57.035004   33471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:05:57.035037   33471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-970414 NodeName:addons-970414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:05:57.035200   33471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-970414"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
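The generated kubeadm.yaml above is a multi-document YAML carrying four distinct kinds (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a hedged illustration (minikube does this generation in Go, not shell), a sketch that sanity-checks such a file for the expected document kinds before it is handed to `kubeadm init --config`:

```shell
#!/bin/sh
# Sketch: verify a multi-document kubeadm config contains the four
# expected kinds. Illustrative only; the heredoc is a trimmed stand-in
# for the full config logged above.
set -eu

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Collect the kind of each document; one per line.
kinds=$(grep '^kind:' "$cfg" | awk '{print $2}')
echo "$kinds"
rm -f "$cfg"
```

A missing kind here would surface later as a kubeadm validation error, so a cheap pre-check like this can shorten the debugging loop.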
	I0829 18:05:57.035264   33471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:05:57.043209   33471 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:05:57.043270   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:05:57.050815   33471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0829 18:05:57.065626   33471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:05:57.080858   33471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0829 18:05:57.095282   33471 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0829 18:05:57.098211   33471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
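The one-liner above is an idempotent hosts-file update: strip any stale entry for the name, then append the fresh IP mapping, so repeated runs never duplicate the line. A minimal sketch of the same pattern against a temp file instead of `/etc/hosts`, so it runs unprivileged (the stale `192.168.49.9` entry is a hypothetical placeholder):

```shell
#!/bin/sh
# Sketch of the idempotent hosts-file update seen in the log: remove any
# existing entry for the hostname, then append the current one.
set -eu

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.9\tcontrol-plane.minikube.internal\n' > "$hosts"

ip=192.168.49.2
name=control-plane.minikube.internal

# Keep every line that does not end with the hostname, then append the
# fresh mapping; write to a temp file and move it into place.
{ grep -v "[[:space:]]$name\$" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
mv "$hosts.new" "$hosts"

result=$(grep "$name" "$hosts")
```

Writing to a scratch file and `mv`-ing it into place mirrors the log's `> /tmp/h.$$; sudo cp ...` shape, which avoids truncating the live file mid-edit.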
	I0829 18:05:57.107337   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:05:57.174389   33471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:05:57.185656   33471 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414 for IP: 192.168.49.2
	I0829 18:05:57.185680   33471 certs.go:194] generating shared ca certs ...
	I0829 18:05:57.185701   33471 certs.go:226] acquiring lock for ca certs: {Name:mk67594a2aeddd90511e83e94fdec27741c5c194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.185831   33471 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key
	I0829 18:05:57.302579   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt ...
	I0829 18:05:57.302605   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt: {Name:mk68fcaae893468c94d7a84507010792fe808d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.302749   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key ...
	I0829 18:05:57.302759   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key: {Name:mk3ae49953961c47a1211facb56e8bc731cb5d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.302828   33471 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key
	I0829 18:05:57.397161   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt ...
	I0829 18:05:57.397188   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt: {Name:mkdea41367fabcd2965e87aed60d5a189212f9be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.397327   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key ...
	I0829 18:05:57.397337   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key: {Name:mk92e8ff155ca7dda7fa018998615e51c8a854aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.397397   33471 certs.go:256] generating profile certs ...
	I0829 18:05:57.397452   33471 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key
	I0829 18:05:57.397465   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt with IP's: []
	I0829 18:05:57.456687   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt ...
	I0829 18:05:57.456714   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: {Name:mkca0def83df75bdcbf967a5612ca78646681086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.456865   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key ...
	I0829 18:05:57.456879   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key: {Name:mk7a68ec7addac3a4cb5327ed442f621166ad28c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.456954   33471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7
	I0829 18:05:57.456972   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0829 18:05:57.557157   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 ...
	I0829 18:05:57.557189   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7: {Name:mk1e987fdce57178fa8bc6d220419e4e702f2022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.557369   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7 ...
	I0829 18:05:57.557386   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7: {Name:mkcb99136185dcb54ad76bcdd5f51f3bb874c708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.557477   33471 certs.go:381] copying /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 -> /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt
	I0829 18:05:57.557565   33471 certs.go:385] copying /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7 -> /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key
	I0829 18:05:57.557628   33471 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key
	I0829 18:05:57.557653   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt with IP's: []
	I0829 18:05:57.665009   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt ...
	I0829 18:05:57.665035   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt: {Name:mka7b9add077f78b858c255a0787554628ae81a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.665204   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key ...
	I0829 18:05:57.665218   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key: {Name:mkf9f0b064442d85a7a36a00447d2e06028bbb5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.665423   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 18:05:57.665464   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:05:57.665500   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:05:57.665529   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem (1679 bytes)
	I0829 18:05:57.666108   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:05:57.687482   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 18:05:57.707435   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:05:57.727015   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0829 18:05:57.746595   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:05:57.766741   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:05:57.786768   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:05:57.806898   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:05:57.827052   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:05:57.847405   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:05:57.862668   33471 ssh_runner.go:195] Run: openssl version
	I0829 18:05:57.867441   33471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:05:57.875492   33471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.878530   33471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.878584   33471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.884877   33471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:05:57.892902   33471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:05:57.895580   33471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:05:57.895625   33471 kubeadm.go:392] StartCluster: {Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:57.895692   33471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:05:57.895727   33471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:05:57.927582   33471 cri.go:89] found id: ""
	I0829 18:05:57.927651   33471 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:05:57.935503   33471 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:05:57.943410   33471 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0829 18:05:57.943456   33471 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:05:57.950627   33471 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:05:57.950644   33471 kubeadm.go:157] found existing configuration files:
	
	I0829 18:05:57.950673   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:05:57.957427   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:05:57.957467   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:05:57.964066   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:05:57.971025   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:05:57.971075   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:05:57.977703   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:05:57.984450   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:05:57.984488   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:05:57.991201   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:05:57.998415   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:05:57.998451   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:05:58.005349   33471 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0829 18:05:58.038494   33471 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:05:58.038555   33471 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:05:58.053584   33471 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0829 18:05:58.053680   33471 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-gcp
	I0829 18:05:58.053730   33471 kubeadm.go:310] OS: Linux
	I0829 18:05:58.053800   33471 kubeadm.go:310] CGROUPS_CPU: enabled
	I0829 18:05:58.053884   33471 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0829 18:05:58.053987   33471 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0829 18:05:58.054064   33471 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0829 18:05:58.054137   33471 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0829 18:05:58.054208   33471 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0829 18:05:58.054265   33471 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0829 18:05:58.054348   33471 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0829 18:05:58.054436   33471 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0829 18:05:58.098180   33471 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:05:58.098301   33471 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:05:58.098433   33471 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:05:58.103771   33471 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:05:58.106952   33471 out.go:235]   - Generating certificates and keys ...
	I0829 18:05:58.107046   33471 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:05:58.107111   33471 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:05:58.350564   33471 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:05:58.490294   33471 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:05:58.689041   33471 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:05:58.823978   33471 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:05:58.996208   33471 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:05:58.996351   33471 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-970414 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:05:59.072936   33471 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:05:59.073085   33471 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-970414 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:05:59.434980   33471 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:05:59.665647   33471 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:05:59.738102   33471 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:05:59.738192   33471 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:05:59.867228   33471 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:00.066025   33471 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:00.133026   33471 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:00.270509   33471 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:00.374793   33471 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:00.375247   33471 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:00.377672   33471 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:00.379594   33471 out.go:235]   - Booting up control plane ...
	I0829 18:06:00.379700   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:00.379784   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:00.379861   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:00.387817   33471 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:00.392895   33471 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:00.392953   33471 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:00.472796   33471 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:00.472952   33471 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:00.974304   33471 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.649814ms
	I0829 18:06:00.974388   33471 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:05.476183   33471 kubeadm.go:310] [api-check] The API server is healthy after 4.501825265s
	I0829 18:06:05.486362   33471 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:05.496924   33471 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:05.512283   33471 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:05.512547   33471 kubeadm.go:310] [mark-control-plane] Marking the node addons-970414 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:05.518748   33471 kubeadm.go:310] [bootstrap-token] Using token: jzv7iv.d89b87p5nvbumrzo
	I0829 18:06:05.520189   33471 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:05.520291   33471 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:05.522825   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:05.527262   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:05.530214   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:05.532304   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:05.534332   33471 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:05.883610   33471 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:06.302786   33471 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:06.881022   33471 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:06.881690   33471 kubeadm.go:310] 
	I0829 18:06:06.881760   33471 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:06.881773   33471 kubeadm.go:310] 
	I0829 18:06:06.881882   33471 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:06.881912   33471 kubeadm.go:310] 
	I0829 18:06:06.881972   33471 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:06.882062   33471 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:06.882212   33471 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:06.882230   33471 kubeadm.go:310] 
	I0829 18:06:06.882324   33471 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:06.882338   33471 kubeadm.go:310] 
	I0829 18:06:06.882403   33471 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:06.882413   33471 kubeadm.go:310] 
	I0829 18:06:06.882485   33471 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:06.882586   33471 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:06.882657   33471 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:06.882663   33471 kubeadm.go:310] 
	I0829 18:06:06.882741   33471 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:06.882807   33471 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:06.882813   33471 kubeadm.go:310] 
	I0829 18:06:06.882918   33471 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jzv7iv.d89b87p5nvbumrzo \
	I0829 18:06:06.883051   33471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ded35ef35e12d5a5396aa817ddf8ddaebf53b89969d35d052dfa46966e0eb6d3 \
	I0829 18:06:06.883081   33471 kubeadm.go:310] 	--control-plane 
	I0829 18:06:06.883091   33471 kubeadm.go:310] 
	I0829 18:06:06.883194   33471 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:06.883202   33471 kubeadm.go:310] 
	I0829 18:06:06.883319   33471 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jzv7iv.d89b87p5nvbumrzo \
	I0829 18:06:06.883476   33471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ded35ef35e12d5a5396aa817ddf8ddaebf53b89969d35d052dfa46966e0eb6d3 
	I0829 18:06:06.885210   33471 kubeadm.go:310] W0829 18:05:58.036060    1290 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:06.885484   33471 kubeadm.go:310] W0829 18:05:58.036646    1290 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:06.885706   33471 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-gcp\n", err: exit status 1
	I0829 18:06:06.885836   33471 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:06.885860   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:06:06.885869   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:06:06.887826   33471 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0829 18:06:06.888997   33471 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0829 18:06:06.892550   33471 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0829 18:06:06.892565   33471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0829 18:06:06.908633   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0829 18:06:07.090336   33471 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:07.090410   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:07.090410   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-970414 minikube.k8s.io/updated_at=2024_08_29T18_06_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-970414 minikube.k8s.io/primary=true
	I0829 18:06:07.097357   33471 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:07.161653   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:07.662656   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:08.162155   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:08.662485   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:09.161763   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:09.662365   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:10.162060   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:10.662667   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:11.161738   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:11.225686   33471 kubeadm.go:1113] duration metric: took 4.135333724s to wait for elevateKubeSystemPrivileges
	I0829 18:06:11.225730   33471 kubeadm.go:394] duration metric: took 13.330107637s to StartCluster
	I0829 18:06:11.225753   33471 settings.go:142] acquiring lock: {Name:mk30ad9b0ff80001a546f289c6cc726b4c74119c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:11.225898   33471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:06:11.226419   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/kubeconfig: {Name:mk79bdfdd62fbbebbe9b38ab62c3c3cce586ee25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:11.226636   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:11.226662   33471 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:11.226708   33471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:11.226817   33471 addons.go:69] Setting yakd=true in profile "addons-970414"
	I0829 18:06:11.226855   33471 addons.go:69] Setting inspektor-gadget=true in profile "addons-970414"
	I0829 18:06:11.226879   33471 addons.go:69] Setting metrics-server=true in profile "addons-970414"
	I0829 18:06:11.226895   33471 addons.go:234] Setting addon metrics-server=true in "addons-970414"
	I0829 18:06:11.226899   33471 addons.go:234] Setting addon inspektor-gadget=true in "addons-970414"
	I0829 18:06:11.226924   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226936   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226947   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:11.227018   33471 addons.go:69] Setting storage-provisioner=true in profile "addons-970414"
	I0829 18:06:11.227040   33471 addons.go:234] Setting addon storage-provisioner=true in "addons-970414"
	I0829 18:06:11.227065   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227153   33471 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-970414"
	I0829 18:06:11.227185   33471 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-970414"
	I0829 18:06:11.227245   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227436   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227450   33471 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-970414"
	I0829 18:06:11.227475   33471 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-970414"
	I0829 18:06:11.227599   33471 addons.go:69] Setting volcano=true in profile "addons-970414"
	I0829 18:06:11.227602   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227615   33471 addons.go:69] Setting registry=true in profile "addons-970414"
	I0829 18:06:11.227633   33471 addons.go:234] Setting addon volcano=true in "addons-970414"
	I0829 18:06:11.227658   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227660   33471 addons.go:234] Setting addon registry=true in "addons-970414"
	I0829 18:06:11.227676   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227689   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227696   33471 addons.go:69] Setting volumesnapshots=true in profile "addons-970414"
	I0829 18:06:11.227718   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227722   33471 addons.go:234] Setting addon volumesnapshots=true in "addons-970414"
	I0829 18:06:11.227771   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.228076   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228080   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228209   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228356   33471 addons.go:69] Setting gcp-auth=true in profile "addons-970414"
	I0829 18:06:11.228388   33471 mustload.go:65] Loading cluster: addons-970414
	I0829 18:06:11.228584   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:11.228856   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.229427   33471 addons.go:69] Setting ingress=true in profile "addons-970414"
	I0829 18:06:11.229880   33471 addons.go:234] Setting addon ingress=true in "addons-970414"
	I0829 18:06:11.230054   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226869   33471 addons.go:234] Setting addon yakd=true in "addons-970414"
	I0829 18:06:11.232989   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.233478   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227436   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.230761   33471 addons.go:69] Setting ingress-dns=true in profile "addons-970414"
	I0829 18:06:11.234357   33471 addons.go:234] Setting addon ingress-dns=true in "addons-970414"
	I0829 18:06:11.230771   33471 addons.go:69] Setting helm-tiller=true in profile "addons-970414"
	I0829 18:06:11.234426   33471 addons.go:234] Setting addon helm-tiller=true in "addons-970414"
	I0829 18:06:11.234428   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.234448   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.230778   33471 addons.go:69] Setting default-storageclass=true in profile "addons-970414"
	I0829 18:06:11.234865   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.234865   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.234897   33471 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-970414"
	I0829 18:06:11.235176   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.235691   33471 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:11.230855   33471 addons.go:69] Setting cloud-spanner=true in profile "addons-970414"
	I0829 18:06:11.230860   33471 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-970414"
	I0829 18:06:11.236330   33471 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-970414"
	I0829 18:06:11.236358   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.232013   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.236617   33471 addons.go:234] Setting addon cloud-spanner=true in "addons-970414"
	I0829 18:06:11.236656   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.238585   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:11.273627   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	W0829 18:06:11.273734   33471 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:06:11.274066   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.279122   33471 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:11.280278   33471 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:11.280382   33471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:11.280402   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:11.280450   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.280843   33471 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-970414"
	I0829 18:06:11.280884   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.281352   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.282826   33471 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:11.284222   33471 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:11.284250   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:11.284308   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.284471   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.287508   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:11.291534   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:11.291568   33471 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:11.291622   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.293330   33471 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:11.295302   33471 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:11.295320   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:11.295376   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.299261   33471 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:11.300709   33471 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:11.300725   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:11.300791   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.300909   33471 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:11.302087   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:11.302105   33471 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:11.302160   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.307761   33471 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:11.309677   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:11.309700   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:11.309766   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.320621   33471 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:11.325003   33471 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:11.325029   33471 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:11.325160   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.326386   33471 addons.go:234] Setting addon default-storageclass=true in "addons-970414"
	I0829 18:06:11.326435   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.326941   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.339593   33471 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:11.339663   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:11.342553   33471 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:11.342633   33471 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:11.344001   33471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:11.344018   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:11.344070   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.344221   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:11.344232   33471 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:11.344271   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.344391   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:11.346102   33471 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:11.347855   33471 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:11.348296   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:11.348371   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.350422   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:11.351792   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:11.354381   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:11.355688   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:11.357044   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:11.358332   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:11.360150   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:11.362855   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:11.364094   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:11.364346   33471 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:11.364366   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:11.364422   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.366038   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:11.366057   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:11.366122   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.368590   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.368828   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.377128   33471 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:11.377144   33471 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:11.377195   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.382256   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.392162   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.401881   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411536   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411725   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411872   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.412557   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.413514   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.414653   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.415906   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.417956   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.421100   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	W0829 18:06:11.447767   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.447799   33471 retry.go:31] will retry after 276.757001ms: ssh: handshake failed: EOF
	W0829 18:06:11.449293   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.449316   33471 retry.go:31] will retry after 138.739567ms: ssh: handshake failed: EOF
	I0829 18:06:11.457483   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:11.569695   33471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0829 18:06:11.646095   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.646126   33471 retry.go:31] will retry after 425.215295ms: ssh: handshake failed: EOF
	I0829 18:06:11.667860   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:11.667890   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:11.765345   33471 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:11.765373   33471 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:11.848126   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:11.848497   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:11.848514   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:11.859073   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:11.859100   33471 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:11.863017   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:11.864173   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:11.948210   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:11.948298   33471 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:11.948267   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:11.948345   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:11.948424   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:11.951036   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:11.951054   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:11.955551   33471 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:11.955617   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:11.965568   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:11.967321   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:11.967346   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:12.047508   33471 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:12.047545   33471 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:12.060080   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:12.060105   33471 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:12.145272   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:12.145358   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:12.153120   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:12.153146   33471 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:12.167673   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:12.256341   33471 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:12.256372   33471 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:12.346507   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:12.346537   33471 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:12.351483   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:12.355630   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:12.358674   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:12.358700   33471 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:12.464885   33471 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.007354776s)
	I0829 18:06:12.464974   33471 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:12.465902   33471 node_ready.go:35] waiting up to 6m0s for node "addons-970414" to be "Ready" ...
	I0829 18:06:12.554150   33471 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:12.554184   33471 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:12.564392   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:12.564475   33471 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:12.647807   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:12.647836   33471 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:12.651834   33471 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:12.651871   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:12.659639   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:12.659667   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:12.850643   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:12.850731   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:12.954879   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:13.046953   33471 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:13.046981   33471 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:13.050318   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:13.061740   33471 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-970414" context rescaled to 1 replicas
	I0829 18:06:13.161545   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:13.161570   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:13.352888   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:13.359173   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:13.359202   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:13.368369   33471 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:13.368396   33471 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:13.446352   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:13.658489   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:13.658522   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:13.863922   33471 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:13.863951   33471 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:14.153008   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:14.153084   33471 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:06:14.265801   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:14.265888   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:06:14.346440   33471 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:14.346546   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:14.457711   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:14.467018   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:14.467092   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:06:14.664740   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:15.054818   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:15.054890   33471 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:06:15.449637   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:15.751232   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.903064634s)
	I0829 18:06:15.751343   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.888297403s)
	I0829 18:06:16.167149   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.302940806s)
	I0829 18:06:16.167480   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.219108729s)
	I0829 18:06:16.167583   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.201985375s)
	I0829 18:06:16.167666   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.999962951s)
	I0829 18:06:16.167708   33471 addons.go:475] Verifying addon registry=true in "addons-970414"
	I0829 18:06:16.167991   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.81647825s)
	I0829 18:06:16.168188   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.812528468s)
	I0829 18:06:16.169994   33471 out.go:177] * Verifying registry addon...
	I0829 18:06:16.172294   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:06:16.355174   33471 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:06:16.355543   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0829 18:06:16.453902   33471 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
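The warning above is a Kubernetes optimistic-concurrency conflict: the API server rejects an update because the StorageClass's `resourceVersion` changed between read and write. The conventional remedy, which minikube's addon callbacks apply internally, is to re-fetch and retry the operation. A minimal sketch of such a retry wrapper (the `apply_with_retry` and `flaky` names are hypothetical, not from minikube):

```shell
# Hypothetical sketch: retry a command a few times, e.g. an apply that can
# fail with "the object has been modified; please apply your changes to the
# latest version and try again".
apply_with_retry() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    # Run the command; a zero exit status means the write went through.
    if "$@"; then
      return 0
    fi
    # Back off briefly before re-reading and retrying.
    sleep 1
  done
  return 1
}
```

In real use the retried command would re-read the object first (e.g. `kubectl get ... -o yaml | ... | kubectl replace -f -`) so each attempt carries the latest `resourceVersion`.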
	I0829 18:06:16.760111   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:17.052900   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:17.348900   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:17.746877   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.247659   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.568953   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:06:18.569108   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:18.586232   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:18.748308   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.768683   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.813695623s)
	W0829 18:06:18.768747   33471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:18.768797   33471 retry.go:31] will retry after 129.631111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
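The failure above is a CRD-establishment race: the `VolumeSnapshotClass` object is applied in the same `kubectl apply` batch as the CRDs that define its kind, and the API server has not yet registered the new type when the custom resource arrives. minikube simply retries (the `--force` re-apply later in this log succeeds). A hedged sketch of avoiding the race entirely, under the assumption that the manifests can be applied in two phases (filenames mirror those in the log):

```shell
# Phase 1: install the CRDs and wait until the API server reports them
# as Established, so their kinds are servable.
kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl wait --for=condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io

# Phase 2: only now apply resources that instantiate those kinds.
kubectl apply -f csi-hostpath-snapshotclass.yaml
```

`kubectl wait --for=condition=established` blocks on the CRD's `Established` status condition, which is exactly what "ensure CRDs are installed first" in the stderr is asking for.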
	I0829 18:06:18.768934   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.718574104s)
	I0829 18:06:18.768956   33471 addons.go:475] Verifying addon metrics-server=true in "addons-970414"
	I0829 18:06:18.769122   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.416207866s)
	I0829 18:06:18.769138   33471 addons.go:475] Verifying addon ingress=true in "addons-970414"
	I0829 18:06:18.769584   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.323191353s)
	I0829 18:06:18.769666   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.311841863s)
	I0829 18:06:18.772109   33471 out.go:177] * Verifying ingress addon...
	I0829 18:06:18.772111   33471 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-970414 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:06:18.774901   33471 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:06:18.784226   33471 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:06:18.784247   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:18.864874   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:06:18.881665   33471 addons.go:234] Setting addon gcp-auth=true in "addons-970414"
	I0829 18:06:18.881720   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:18.882075   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:18.899292   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:18.901129   33471 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:06:18.901171   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:18.920489   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:19.177567   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:19.286115   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:19.554486   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:19.749842   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:19.848969   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.399253398s)
	I0829 18:06:19.849250   33471 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-970414"
	I0829 18:06:19.851236   33471 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:06:19.854515   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:06:19.869263   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:19.870161   33471 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:06:19.870184   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:20.176202   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:20.279572   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:20.357721   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:20.675473   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:20.778813   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:20.857772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.176058   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:21.279045   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:21.357952   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.676019   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:21.778347   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:21.854733   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.955399804s)
	I0829 18:06:21.854798   33471 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.953652114s)
	I0829 18:06:21.857145   33471 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:06:21.857500   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.859828   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:21.861259   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:06:21.861280   33471 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:06:21.879467   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:06:21.879489   33471 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:06:21.895886   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:21.895909   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:06:21.954214   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:21.969114   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:22.176618   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:22.279610   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:22.358000   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:22.553735   33471 addons.go:475] Verifying addon gcp-auth=true in "addons-970414"
	I0829 18:06:22.555569   33471 out.go:177] * Verifying gcp-auth addon...
	I0829 18:06:22.558244   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:06:22.560579   33471 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:06:22.560596   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:22.674902   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:22.778700   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:22.858118   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:23.061602   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:23.175002   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:23.278641   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:23.358524   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:23.561370   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:23.675585   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:23.778306   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:23.857475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:24.061119   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:24.175442   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:24.278372   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:24.357538   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:24.468405   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:24.562284   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:24.676070   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:24.778626   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:24.857813   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:25.061006   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:25.175734   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:25.278745   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:25.357486   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:25.562423   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:25.675499   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:25.778380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:25.857418   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:26.061541   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:26.174800   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:26.278575   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:26.357690   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:26.468882   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:26.561126   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:26.675797   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:26.778490   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:26.857597   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:27.061577   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:27.174998   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:27.278808   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:27.357899   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:27.561262   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:27.675554   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:27.778294   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:27.857440   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.061639   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:28.175012   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:28.278856   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:28.358354   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.560629   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:28.674835   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:28.778609   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:28.857628   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.968635   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:29.060906   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:29.175160   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:29.279076   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:29.358063   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:29.561636   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:29.674927   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:29.779426   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:29.863822   33471 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:06:29.863848   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:29.969260   33471 node_ready.go:49] node "addons-970414" has status "Ready":"True"
	I0829 18:06:29.969289   33471 node_ready.go:38] duration metric: took 17.50332165s for node "addons-970414" to be "Ready" ...
	I0829 18:06:29.969301   33471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:29.977908   33471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:30.061963   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:30.176070   33471 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:06:30.176093   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:30.279944   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:30.381182   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:30.561917   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:30.675717   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:30.779380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:30.858908   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.061158   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:31.175903   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:31.278733   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:31.360013   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.483381   33471 pod_ready.go:93] pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.483402   33471 pod_ready.go:82] duration metric: took 1.505470075s for pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.483421   33471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.487161   33471 pod_ready.go:93] pod "etcd-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.487178   33471 pod_ready.go:82] duration metric: took 3.750939ms for pod "etcd-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.487191   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.490614   33471 pod_ready.go:93] pod "kube-apiserver-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.490632   33471 pod_ready.go:82] duration metric: took 3.434179ms for pod "kube-apiserver-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.490640   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.493931   33471 pod_ready.go:93] pod "kube-controller-manager-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.493950   33471 pod_ready.go:82] duration metric: took 3.301077ms for pod "kube-controller-manager-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.493962   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwgq4" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.561772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:31.569942   33471 pod_ready.go:93] pod "kube-proxy-mwgq4" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.569964   33471 pod_ready.go:82] duration metric: took 75.994271ms for pod "kube-proxy-mwgq4" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.569973   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.676604   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:31.779535   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:31.859414   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.970319   33471 pod_ready.go:93] pod "kube-scheduler-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.970345   33471 pod_ready.go:82] duration metric: took 400.364012ms for pod "kube-scheduler-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.970358   33471 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:32.062142   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:32.175938   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:32.279359   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:32.358320   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:32.562203   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:32.675175   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:32.779380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:32.858562   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.061806   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:33.175414   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:33.278190   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:33.359497   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.566753   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:33.679816   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:33.780038   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:33.859085   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.976545   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:34.061607   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:34.175533   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:34.278647   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:34.358847   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:34.562865   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:34.676116   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:34.778980   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:34.859383   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.061690   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:35.175979   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:35.278700   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:35.358987   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.561990   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:35.676053   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:35.778889   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:35.859309   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.978326   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:36.061789   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:36.175701   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:36.278911   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:36.358733   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:36.561288   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:36.675973   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:36.778702   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:36.859052   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:37.062147   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:37.175953   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:37.278732   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:37.358897   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:37.562562   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:37.677246   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:37.779993   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:37.858836   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:38.061840   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:38.175582   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:38.279853   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:38.358730   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:38.475807   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:38.562000   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:38.675376   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:38.779020   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:38.858866   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:39.061799   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:39.175516   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:39.278386   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:39.358349   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:39.561877   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:39.675407   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:39.778631   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:39.858049   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:40.061166   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:40.175901   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:40.279026   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:40.361677   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:40.476589   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:40.562707   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:40.677196   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:40.778687   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:40.858582   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:41.062646   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:41.179136   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.278942   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:41.359243   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:41.561503   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:41.676508   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.779738   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:41.859475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.062106   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:42.176135   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.279258   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:42.358777   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.562048   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:42.675925   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.779048   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:42.879772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.975713   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:43.061010   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:43.175551   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.279093   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:43.358897   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:43.562475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:43.675599   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.778529   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:43.858277   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.062101   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:44.176457   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.279344   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:44.357937   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.562224   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:44.676679   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.779034   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:44.858759   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.976061   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:45.061405   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:45.176561   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.278694   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:45.358550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:45.562365   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:45.675919   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.778988   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:45.858884   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.061118   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:46.175480   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.278388   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:46.358500   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.561876   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:46.676217   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.779623   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:46.858934   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.976665   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:47.062438   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:47.176856   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.279274   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:47.360207   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:47.562049   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:47.676310   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.847611   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:47.860403   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.061438   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:48.176542   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.279914   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:48.358708   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.561468   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:48.676103   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.779307   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:48.858934   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.062411   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:49.175774   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.279108   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.358770   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.475745   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:49.561498   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:49.676506   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.779122   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.859246   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.061522   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:50.184207   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.285183   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:50.359392   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.563222   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:50.676338   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.779289   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:50.859315   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.063561   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:51.175786   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.278876   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:51.359522   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.477135   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:51.561730   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:51.675433   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.779706   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:51.858484   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.061448   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:52.176160   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.279349   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:52.380355   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.561333   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:52.675905   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.778605   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:52.858471   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.061429   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:53.176294   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.279494   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:53.358900   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.561935   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:53.675675   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.780447   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:53.858317   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.975085   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:54.061527   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:54.176015   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.278916   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:54.358728   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:54.561195   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:54.676074   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.778888   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:54.858526   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.061961   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:55.175994   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.278912   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:55.358696   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.562439   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:55.676087   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.779100   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:55.858417   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.975459   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:56.060830   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:56.175297   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.279178   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:56.358860   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:56.561356   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:56.676270   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.779497   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:56.859993   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.062783   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:57.254123   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.348605   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.359917   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.561267   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:57.748519   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.849949   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.859389   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.049715   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:58.061798   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:58.176534   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.348936   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:58.359169   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.561969   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:58.676240   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.779911   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:58.858659   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.062278   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:59.176444   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.279797   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:59.359146   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.561362   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:59.676652   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.778887   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:59.859071   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.061841   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:00.176029   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.278919   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:00.359145   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.476358   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:00.562430   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:00.676749   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.778262   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:00.859251   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.061470   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:01.176363   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.279417   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.361332   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.562496   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:01.676178   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.779058   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.859261   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.061640   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:02.175950   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.279315   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:02.359088   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.476615   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:02.561997   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:02.675860   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.778891   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:02.859381   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.061658   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:03.175437   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.279450   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:03.380178   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.561274   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:03.676141   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.778914   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:03.858550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.061119   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:04.175986   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.279413   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.358524   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.476911   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:04.561419   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:04.676126   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.779641   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.859408   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.061403   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:05.176552   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.278788   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.358106   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.561720   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:05.677343   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.779750   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.858550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.061549   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:06.176475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.279830   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.358299   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.561385   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:06.676305   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.779396   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.858256   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.976151   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:07.062281   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.176114   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.279243   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.359098   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.561770   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.675691   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.778345   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.858383   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.062024   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.175973   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.278626   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.359845   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.562272   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.676299   33471 kapi.go:107] duration metric: took 52.503998136s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:08.779614   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.858729   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.061667   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.278948   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.358825   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.475603   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:09.561133   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.803043   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.869349   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.061639   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.279248   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.358623   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.561862   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.779245   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.858210   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.062082   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.279187   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.380296   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.476169   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:11.562124   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.780090   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.859518   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.061848   33471 kapi.go:107] duration metric: took 49.50360321s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:07:12.064235   33471 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-970414 cluster.
	I0829 18:07:12.065845   33471 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:07:12.067312   33471 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:07:12.279829   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.380390   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.781279   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.858496   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.279475   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.357989   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.476371   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:13.778868   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.859012   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.278985   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.358948   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.778506   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.858490   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.279669   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.358327   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.778714   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.859145   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.975951   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:16.279416   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.358353   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.778955   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.879919   33471 kapi.go:107] duration metric: took 57.025400666s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:07:17.278506   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.779629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.279662   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.475735   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:18.778865   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.279629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.778833   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.279629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.476070   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:20.779310   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.278746   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.778588   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.279091   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.778744   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.975809   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:23.279672   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.778698   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.279136   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.779600   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.975845   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:25.279527   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.778694   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.279166   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.778678   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.976229   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:27.279572   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.779925   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.278543   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.778902   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.279513   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.475862   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:29.778825   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.278410   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.779205   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.278785   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.778310   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.975687   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:32.279208   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.778950   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.278632   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.778869   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.975755   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:34.279008   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.849062   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.279182   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.849707   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.047727   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:36.348740   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.779662   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.279104   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.779192   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.279217   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.476596   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:38.778967   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.279557   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.778520   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.279154   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.781434   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.976165   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:41.300505   33471 kapi.go:107] duration metric: took 1m22.525606095s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:07:41.302197   33471 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, helm-tiller, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0829 18:07:41.303840   33471 addons.go:510] duration metric: took 1m30.077118852s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin helm-tiller storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0829 18:07:43.475312   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:45.475559   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:47.975734   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:50.475293   33471 pod_ready.go:93] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:50.475315   33471 pod_ready.go:82] duration metric: took 1m18.504950495s for pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.475325   33471 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.479409   33471 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:50.479430   33471 pod_ready.go:82] duration metric: took 4.09992ms for pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.479449   33471 pod_ready.go:39] duration metric: took 1m20.510134495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:50.479465   33471 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:07:50.479496   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:50.479553   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:50.512656   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:50.512676   33471 cri.go:89] found id: ""
	I0829 18:07:50.512684   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:50.512723   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.515973   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:50.516034   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:50.548643   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:50.548662   33471 cri.go:89] found id: ""
	I0829 18:07:50.548669   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:50.548718   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.551901   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:50.551963   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:50.583669   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:50.583702   33471 cri.go:89] found id: ""
	I0829 18:07:50.583709   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:50.583748   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.586859   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:50.586933   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:50.618860   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:50.618883   33471 cri.go:89] found id: ""
	I0829 18:07:50.618890   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:50.618930   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.622032   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:50.622084   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:50.653704   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:50.653729   33471 cri.go:89] found id: ""
	I0829 18:07:50.653740   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:50.653792   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.657019   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:50.657077   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:50.690012   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:50.690036   33471 cri.go:89] found id: ""
	I0829 18:07:50.690045   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:50.690086   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.693191   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:50.693236   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:50.726118   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:50.726139   33471 cri.go:89] found id: ""
	I0829 18:07:50.726149   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:50.726190   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.729505   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:50.729526   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:50.767861   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:50.767892   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:50.779540   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:50.779567   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:50.822562   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:50.822592   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:50.872590   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:50.872628   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:50.904925   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:50.904951   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:50.960999   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:50.961033   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:50.993169   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:50.993195   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:51.072501   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:51.072533   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:51.113527   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:51.113556   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:51.183067   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:51.183100   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:51.281419   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:51.281446   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:53.816429   33471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:07:53.829736   33471 api_server.go:72] duration metric: took 1m42.603041834s to wait for apiserver process to appear ...
	I0829 18:07:53.829767   33471 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:07:53.829801   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:53.829844   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:53.862325   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:53.862351   33471 cri.go:89] found id: ""
	I0829 18:07:53.862361   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:53.862409   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.865569   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:53.865646   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:53.898226   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:53.898247   33471 cri.go:89] found id: ""
	I0829 18:07:53.898255   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:53.898296   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.901566   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:53.901628   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:53.934199   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:53.934218   33471 cri.go:89] found id: ""
	I0829 18:07:53.934225   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:53.934265   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.937354   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:53.937402   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:53.970450   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:53.970472   33471 cri.go:89] found id: ""
	I0829 18:07:53.970479   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:53.970524   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.973830   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:53.973887   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:54.006146   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:54.006169   33471 cri.go:89] found id: ""
	I0829 18:07:54.006177   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:54.006224   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.009454   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:54.009512   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:54.041172   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:54.041191   33471 cri.go:89] found id: ""
	I0829 18:07:54.041198   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:54.041249   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.044312   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:54.044368   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:54.083976   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:54.084001   33471 cri.go:89] found id: ""
	I0829 18:07:54.084009   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:54.084049   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.087300   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:54.087324   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:54.098754   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:54.098782   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:54.161684   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:54.161716   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:54.241049   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:54.241085   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:54.273621   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:54.273646   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:54.331096   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:54.331132   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:54.363448   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:54.363477   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:54.431857   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:54.431896   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:54.528063   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:54.528089   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:54.577648   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:54.577681   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:54.611916   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:54.611946   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:54.647955   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:54.647983   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:57.189075   33471 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0829 18:07:57.192542   33471 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0829 18:07:57.193379   33471 api_server.go:141] control plane version: v1.31.0
	I0829 18:07:57.193402   33471 api_server.go:131] duration metric: took 3.363628924s to wait for apiserver health ...
	I0829 18:07:57.193411   33471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:07:57.193432   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:57.193471   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:57.225819   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:57.225841   33471 cri.go:89] found id: ""
	I0829 18:07:57.225850   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:57.225896   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.228901   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:57.228944   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:57.260637   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:57.260656   33471 cri.go:89] found id: ""
	I0829 18:07:57.260663   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:57.260704   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.263753   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:57.263801   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:57.294974   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:57.294997   33471 cri.go:89] found id: ""
	I0829 18:07:57.295006   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:57.295058   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.298097   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:57.298155   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:57.329667   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:57.329690   33471 cri.go:89] found id: ""
	I0829 18:07:57.329698   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:57.329749   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.332928   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:57.332984   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:57.364944   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:57.364962   33471 cri.go:89] found id: ""
	I0829 18:07:57.364970   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:57.365005   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.368114   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:57.368166   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:57.401257   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:57.401276   33471 cri.go:89] found id: ""
	I0829 18:07:57.401283   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:57.401332   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.404460   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:57.404506   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:57.435578   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:57.435600   33471 cri.go:89] found id: ""
	I0829 18:07:57.435607   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:57.435647   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.438689   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:57.438711   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:57.493400   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:57.493428   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:57.565541   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:57.565577   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:57.635720   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:57.635750   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:57.647194   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:57.647217   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:57.689192   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:57.689228   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:57.738329   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:57.738357   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:57.771675   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:57.771698   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:57.802656   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:57.802684   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:57.842425   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:57.842451   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:57.937146   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:57.937174   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:57.974724   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:57.974752   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:08:00.516381   33471 system_pods.go:59] 19 kube-system pods found
	I0829 18:08:00.516420   33471 system_pods.go:61] "coredns-6f6b679f8f-jxrb9" [99ffdce3-4a2f-4216-95ca-28db164333a2] Running
	I0829 18:08:00.516426   33471 system_pods.go:61] "csi-hostpath-attacher-0" [b33c21ec-bc06-47b0-b7b4-78c5392d31f7] Running
	I0829 18:08:00.516431   33471 system_pods.go:61] "csi-hostpath-resizer-0" [ae955038-1da8-4d77-a461-9dccfe623922] Running
	I0829 18:08:00.516437   33471 system_pods.go:61] "csi-hostpathplugin-5wlj7" [c7f02d44-110a-4971-b90a-521977151630] Running
	I0829 18:08:00.516442   33471 system_pods.go:61] "etcd-addons-970414" [8daf5c22-02d4-44e0-8a5c-0d5b9c0cd7b5] Running
	I0829 18:08:00.516447   33471 system_pods.go:61] "kindnet-95zg6" [612be856-b5ad-4571-9908-168f86f5b273] Running
	I0829 18:08:00.516452   33471 system_pods.go:61] "kube-apiserver-addons-970414" [549d4f3b-086e-40f7-9b7a-513220af52cd] Running
	I0829 18:08:00.516457   33471 system_pods.go:61] "kube-controller-manager-addons-970414" [00d3410f-773e-471f-9716-7fc678c6f5a3] Running
	I0829 18:08:00.516466   33471 system_pods.go:61] "kube-ingress-dns-minikube" [6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1] Running
	I0829 18:08:00.516471   33471 system_pods.go:61] "kube-proxy-mwgq4" [39ef4c84-6d42-40f2-9eb2-af13d2c9a233] Running
	I0829 18:08:00.516479   33471 system_pods.go:61] "kube-scheduler-addons-970414" [75453275-6d16-4fc0-944d-d30987bfccb2] Running
	I0829 18:08:00.516485   33471 system_pods.go:61] "metrics-server-8988944d9-jss9n" [a866f6c5-ff40-4062-986b-ddae9310879c] Running
	I0829 18:08:00.516490   33471 system_pods.go:61] "nvidia-device-plugin-daemonset-njmrn" [5c975a82-28c1-431d-b4e4-b89312486f53] Running
	I0829 18:08:00.516497   33471 system_pods.go:61] "registry-6fb4cdfc84-srp9d" [a6e6445c-947b-4527-a5b7-e1710ec0b292] Running
	I0829 18:08:00.516500   33471 system_pods.go:61] "registry-proxy-56c89" [c9c1a8d7-92a0-458c-a4fa-4271bfd8f736] Running
	I0829 18:08:00.516506   33471 system_pods.go:61] "snapshot-controller-56fcc65765-c9pzh" [b3e9483b-e20c-4b8d-b5b4-53940d1f7621] Running
	I0829 18:08:00.516509   33471 system_pods.go:61] "snapshot-controller-56fcc65765-w7vbq" [0a038557-f899-4971-87c0-4a476ae40ff9] Running
	I0829 18:08:00.516513   33471 system_pods.go:61] "storage-provisioner" [7cffe50e-abe7-4d9c-9c04-88e86ad1ffb9] Running
	I0829 18:08:00.516516   33471 system_pods.go:61] "tiller-deploy-b48cc5f79-h8shr" [53f4571a-d63e-4721-aa85-b44922772189] Running
	I0829 18:08:00.516522   33471 system_pods.go:74] duration metric: took 3.32310726s to wait for pod list to return data ...
	I0829 18:08:00.516531   33471 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:08:00.518762   33471 default_sa.go:45] found service account: "default"
	I0829 18:08:00.518781   33471 default_sa.go:55] duration metric: took 2.241797ms for default service account to be created ...
	I0829 18:08:00.518789   33471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:08:00.527444   33471 system_pods.go:86] 19 kube-system pods found
	I0829 18:08:00.527470   33471 system_pods.go:89] "coredns-6f6b679f8f-jxrb9" [99ffdce3-4a2f-4216-95ca-28db164333a2] Running
	I0829 18:08:00.527475   33471 system_pods.go:89] "csi-hostpath-attacher-0" [b33c21ec-bc06-47b0-b7b4-78c5392d31f7] Running
	I0829 18:08:00.527479   33471 system_pods.go:89] "csi-hostpath-resizer-0" [ae955038-1da8-4d77-a461-9dccfe623922] Running
	I0829 18:08:00.527483   33471 system_pods.go:89] "csi-hostpathplugin-5wlj7" [c7f02d44-110a-4971-b90a-521977151630] Running
	I0829 18:08:00.527486   33471 system_pods.go:89] "etcd-addons-970414" [8daf5c22-02d4-44e0-8a5c-0d5b9c0cd7b5] Running
	I0829 18:08:00.527490   33471 system_pods.go:89] "kindnet-95zg6" [612be856-b5ad-4571-9908-168f86f5b273] Running
	I0829 18:08:00.527493   33471 system_pods.go:89] "kube-apiserver-addons-970414" [549d4f3b-086e-40f7-9b7a-513220af52cd] Running
	I0829 18:08:00.527496   33471 system_pods.go:89] "kube-controller-manager-addons-970414" [00d3410f-773e-471f-9716-7fc678c6f5a3] Running
	I0829 18:08:00.527500   33471 system_pods.go:89] "kube-ingress-dns-minikube" [6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1] Running
	I0829 18:08:00.527503   33471 system_pods.go:89] "kube-proxy-mwgq4" [39ef4c84-6d42-40f2-9eb2-af13d2c9a233] Running
	I0829 18:08:00.527507   33471 system_pods.go:89] "kube-scheduler-addons-970414" [75453275-6d16-4fc0-944d-d30987bfccb2] Running
	I0829 18:08:00.527510   33471 system_pods.go:89] "metrics-server-8988944d9-jss9n" [a866f6c5-ff40-4062-986b-ddae9310879c] Running
	I0829 18:08:00.527514   33471 system_pods.go:89] "nvidia-device-plugin-daemonset-njmrn" [5c975a82-28c1-431d-b4e4-b89312486f53] Running
	I0829 18:08:00.527520   33471 system_pods.go:89] "registry-6fb4cdfc84-srp9d" [a6e6445c-947b-4527-a5b7-e1710ec0b292] Running
	I0829 18:08:00.527523   33471 system_pods.go:89] "registry-proxy-56c89" [c9c1a8d7-92a0-458c-a4fa-4271bfd8f736] Running
	I0829 18:08:00.527526   33471 system_pods.go:89] "snapshot-controller-56fcc65765-c9pzh" [b3e9483b-e20c-4b8d-b5b4-53940d1f7621] Running
	I0829 18:08:00.527532   33471 system_pods.go:89] "snapshot-controller-56fcc65765-w7vbq" [0a038557-f899-4971-87c0-4a476ae40ff9] Running
	I0829 18:08:00.527535   33471 system_pods.go:89] "storage-provisioner" [7cffe50e-abe7-4d9c-9c04-88e86ad1ffb9] Running
	I0829 18:08:00.527538   33471 system_pods.go:89] "tiller-deploy-b48cc5f79-h8shr" [53f4571a-d63e-4721-aa85-b44922772189] Running
	I0829 18:08:00.527546   33471 system_pods.go:126] duration metric: took 8.752911ms to wait for k8s-apps to be running ...
	I0829 18:08:00.527554   33471 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:08:00.527594   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:08:00.539104   33471 system_svc.go:56] duration metric: took 11.540627ms WaitForService to wait for kubelet
	I0829 18:08:00.539136   33471 kubeadm.go:582] duration metric: took 1m49.312445201s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:08:00.539157   33471 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:08:00.542184   33471 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0829 18:08:00.542215   33471 node_conditions.go:123] node cpu capacity is 8
	I0829 18:08:00.542232   33471 node_conditions.go:105] duration metric: took 3.069703ms to run NodePressure ...
	I0829 18:08:00.542247   33471 start.go:241] waiting for startup goroutines ...
	I0829 18:08:00.542258   33471 start.go:246] waiting for cluster config update ...
	I0829 18:08:00.542277   33471 start.go:255] writing updated cluster config ...
	I0829 18:08:00.542602   33471 ssh_runner.go:195] Run: rm -f paused
	I0829 18:08:00.589612   33471 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:08:00.591791   33471 out.go:177] * Done! kubectl is now configured to use "addons-970414" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 29 18:18:34 addons-970414 crio[1030]: time="2024-08-29 18:18:34.177025578Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/07ca96ecf263a6b18923ac71f260610f080b2074fe8d6c6789e0c540c70c47e5/merged/etc/group: no such file or directory"
	Aug 29 18:18:34 addons-970414 crio[1030]: time="2024-08-29 18:18:34.211958414Z" level=info msg="Created container 28b4757a4c973d6320b86eafc5310e36bac4b26130a5c4c1b284dc7b43af70a0: default/hello-world-app-55bf9c44b4-28jdv/hello-world-app" id=176b3111-36ef-4f8f-a79b-1dcd23c02bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 29 18:18:34 addons-970414 crio[1030]: time="2024-08-29 18:18:34.212488028Z" level=info msg="Starting container: 28b4757a4c973d6320b86eafc5310e36bac4b26130a5c4c1b284dc7b43af70a0" id=9033b468-99a1-46a6-a15c-4db8eeec53b5 name=/runtime.v1.RuntimeService/StartContainer
	Aug 29 18:18:34 addons-970414 crio[1030]: time="2024-08-29 18:18:34.218215891Z" level=info msg="Started container" PID=12191 containerID=28b4757a4c973d6320b86eafc5310e36bac4b26130a5c4c1b284dc7b43af70a0 description=default/hello-world-app-55bf9c44b4-28jdv/hello-world-app id=9033b468-99a1-46a6-a15c-4db8eeec53b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=68bfbf6f334af60859977cf29b6981bc6c2b012793bfed6c9e0ace8529f55bcb
	Aug 29 18:18:34 addons-970414 crio[1030]: time="2024-08-29 18:18:34.358938266Z" level=info msg="Removing container: 56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40" id=2017a330-90cd-4066-bf29-c44c8683bd89 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 29 18:18:34 addons-970414 crio[1030]: time="2024-08-29 18:18:34.373488143Z" level=info msg="Removed container 56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=2017a330-90cd-4066-bf29-c44c8683bd89 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 29 18:18:35 addons-970414 crio[1030]: time="2024-08-29 18:18:35.875396825Z" level=info msg="Stopping container: 87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd (timeout: 2s)" id=a8f1b1b2-f713-4074-b3b0-c0415d52c795 name=/runtime.v1.RuntimeService/StopContainer
	Aug 29 18:18:37 addons-970414 crio[1030]: time="2024-08-29 18:18:37.881762664Z" level=warning msg="Stopping container 87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=a8f1b1b2-f713-4074-b3b0-c0415d52c795 name=/runtime.v1.RuntimeService/StopContainer
	Aug 29 18:18:37 addons-970414 conmon[6360]: conmon 87396c3a6a26a4f50f33 <ninfo>: container 6372 exited with status 137
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.011993018Z" level=info msg="Stopped container 87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd: ingress-nginx/ingress-nginx-controller-bc57996ff-cv22w/controller" id=a8f1b1b2-f713-4074-b3b0-c0415d52c795 name=/runtime.v1.RuntimeService/StopContainer
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.012506696Z" level=info msg="Stopping pod sandbox: 410fabb8bf1a37ad3121cc9bf0d9168eb521115212766997ca4ae45436c0ee7e" id=9678eba0-62c5-4e70-8ccc-440b4967458c name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.015572183Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-GFSZUBCFILYZJZYB - [0:0]\n:KUBE-HP-GUBYUHQDBFJBIREF - [0:0]\n-X KUBE-HP-GUBYUHQDBFJBIREF\n-X KUBE-HP-GFSZUBCFILYZJZYB\nCOMMIT\n"
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.016903537Z" level=info msg="Closing host port tcp:80"
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.016941430Z" level=info msg="Closing host port tcp:443"
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.018218491Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.018235691Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.018355095Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-cv22w Namespace:ingress-nginx ID:410fabb8bf1a37ad3121cc9bf0d9168eb521115212766997ca4ae45436c0ee7e UID:4e53ad5a-0419-423f-baf6-3ccfce3a4256 NetNS:/var/run/netns/b0e14f0a-7bb6-4c0e-ba98-acc7f93e8afe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.018461245Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-cv22w from CNI network \"kindnet\" (type=ptp)"
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.050080041Z" level=info msg="Stopped pod sandbox: 410fabb8bf1a37ad3121cc9bf0d9168eb521115212766997ca4ae45436c0ee7e" id=9678eba0-62c5-4e70-8ccc-440b4967458c name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.369571846Z" level=info msg="Removing container: 87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd" id=ff94cd8c-27ed-481b-9583-e06fdd1e4777 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 29 18:18:38 addons-970414 crio[1030]: time="2024-08-29 18:18:38.381475340Z" level=info msg="Removed container 87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd: ingress-nginx/ingress-nginx-controller-bc57996ff-cv22w/controller" id=ff94cd8c-27ed-481b-9583-e06fdd1e4777 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 29 18:18:40 addons-970414 crio[1030]: time="2024-08-29 18:18:40.155652293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b72a975-bcf9-4485-b25c-c5a7296c4f93 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:18:40 addons-970414 crio[1030]: time="2024-08-29 18:18:40.155947425Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4b72a975-bcf9-4485-b25c-c5a7296c4f93 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:18:40 addons-970414 crio[1030]: time="2024-08-29 18:18:40.156653888Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ed769a3d-ab23-4231-8009-f24db066cbe8 name=/runtime.v1.ImageService/PullImage
	Aug 29 18:18:40 addons-970414 crio[1030]: time="2024-08-29 18:18:40.157792985Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	28b4757a4c973       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        8 seconds ago       Running             hello-world-app           0                   68bfbf6f334af       hello-world-app-55bf9c44b4-28jdv
	03f63bd4b1c48       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   4fa70648299cc       nginx
	a19318a738251       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago      Exited              patch                     3                   763e4aa04b031       ingress-nginx-admission-patch-c8fc7
	751a953e0230f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   a12fb4e4da859       gcp-auth-89d5ffd79-cj6cz
	178bb778ee85e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   cac81433fd37a       ingress-nginx-admission-create-hxp8v
	6888613b3e8ca       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        12 minutes ago      Running             metrics-server            0                   5c6d6ccdb7bd8       metrics-server-8988944d9-jss9n
	3a16651d14fd4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             12 minutes ago      Running             coredns                   0                   c991950d1479a       coredns-6f6b679f8f-jxrb9
	fc284d6f42abd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   1c77efb0d73c6       storage-provisioner
	fc407b261b55a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                           12 minutes ago      Running             kindnet-cni               0                   3a14aa7cbd5ba       kindnet-95zg6
	f3c75142fecd2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             12 minutes ago      Running             kube-proxy                0                   6259dfbf37c5a       kube-proxy-mwgq4
	cb91925e81486       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             12 minutes ago      Running             kube-scheduler            0                   3af0a40f28992       kube-scheduler-addons-970414
	5034cc120442d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                      0                   989f4e8da94ea       etcd-addons-970414
	70642d5cd8ef0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             12 minutes ago      Running             kube-controller-manager   0                   740a72692bfef       kube-controller-manager-addons-970414
	b65cd62e3477a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             12 minutes ago      Running             kube-apiserver            0                   1be263bee45c2       kube-apiserver-addons-970414
	
	
	==> coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] <==
	[INFO] 10.244.0.19:41065 - 41812 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110073s
	[INFO] 10.244.0.19:33314 - 7978 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068562s
	[INFO] 10.244.0.19:33314 - 63253 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123722s
	[INFO] 10.244.0.19:33313 - 15372 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005052571s
	[INFO] 10.244.0.19:33313 - 8969 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005240532s
	[INFO] 10.244.0.19:56468 - 13948 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004476729s
	[INFO] 10.244.0.19:56468 - 34426 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005241684s
	[INFO] 10.244.0.19:36060 - 35696 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004520671s
	[INFO] 10.244.0.19:36060 - 15990 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004576926s
	[INFO] 10.244.0.19:44003 - 15478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079273s
	[INFO] 10.244.0.19:44003 - 29556 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115076s
	[INFO] 10.244.0.20:49487 - 52545 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147201s
	[INFO] 10.244.0.20:59535 - 5474 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000116485s
	[INFO] 10.244.0.20:51018 - 29008 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119345s
	[INFO] 10.244.0.20:51904 - 9903 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179576s
	[INFO] 10.244.0.20:44385 - 47503 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138771s
	[INFO] 10.244.0.20:53196 - 482 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137631s
	[INFO] 10.244.0.20:52299 - 24778 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005524264s
	[INFO] 10.244.0.20:56050 - 55826 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006091549s
	[INFO] 10.244.0.20:52775 - 61641 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004707679s
	[INFO] 10.244.0.20:52194 - 42579 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00473342s
	[INFO] 10.244.0.20:58349 - 16179 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004578594s
	[INFO] 10.244.0.20:59907 - 15287 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006119565s
	[INFO] 10.244.0.20:54560 - 33495 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00068612s
	[INFO] 10.244.0.20:50005 - 1476 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000775831s
	
	
	==> describe nodes <==
	Name:               addons-970414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-970414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-970414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-970414
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-970414
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:18:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:18:41 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:18:41 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:18:41 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:18:41 +0000   Thu, 29 Aug 2024 18:06:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-970414
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 f871f2a5cd3540f79b6c200227bc35ed
	  System UUID:                49e09a6c-969e-4bfb-9562-e1e953ad9e00
	  Boot ID:                    fb799716-ba24-44f3-8d84-c852ba38aeb7
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-28jdv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-89d5ffd79-cj6cz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-jxrb9                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-970414                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-95zg6                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-970414             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-970414    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mwgq4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-970414             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-8988944d9-jss9n           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-970414 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-970414 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-970414 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-970414 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-970414 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-970414 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-970414 event: Registered Node addons-970414 in Controller
	  Normal   NodeReady                12m                kubelet          Node addons-970414 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000853] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000677] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000668] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000729] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.580338] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.044213] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005611] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013638] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002516] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013312] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.261359] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 18:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +1.032106] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +2.011848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +4.223585] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +8.191236] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[ +16.126426] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[Aug29 18:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	
	
	==> etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] <==
	{"level":"warn","ts":"2024-08-29T18:06:14.860272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.829512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-29T18:06:14.862833Z","caller":"traceutil/trace.go:171","msg":"trace[1886778643] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:447; }","duration":"113.399296ms","start":"2024-08-29T18:06:14.749412Z","end":"2024-08-29T18:06:14.862811Z","steps":["trace[1886778643] 'agreement among raft nodes before linearized reading'  (duration: 110.800471ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:14.865475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.235889ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:14.865581Z","caller":"traceutil/trace.go:171","msg":"trace[783420174] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:456; }","duration":"104.355912ms","start":"2024-08-29T18:06:14.761212Z","end":"2024-08-29T18:06:14.865567Z","steps":["trace[783420174] 'agreement among raft nodes before linearized reading'  (duration: 104.199882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:14.866155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.413882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:14.866237Z","caller":"traceutil/trace.go:171","msg":"trace[751535580] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:456; }","duration":"101.515166ms","start":"2024-08-29T18:06:14.764713Z","end":"2024-08-29T18:06:14.866229Z","steps":["trace[751535580] 'agreement among raft nodes before linearized reading'  (duration: 101.396746ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.945390Z","caller":"traceutil/trace.go:171","msg":"trace[1123768633] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:521; }","duration":"176.463619ms","start":"2024-08-29T18:06:15.768910Z","end":"2024-08-29T18:06:15.945374Z","steps":["trace[1123768633] 'read index received'  (duration: 77.240649ms)","trace[1123768633] 'applied index is now lower than readState.Index'  (duration: 99.222386ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:06:15.945619Z","caller":"traceutil/trace.go:171","msg":"trace[1692666756] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"191.612828ms","start":"2024-08-29T18:06:15.753992Z","end":"2024-08-29T18:06:15.945605Z","steps":["trace[1692666756] 'process raft request'  (duration: 92.148406ms)","trace[1692666756] 'compare'  (duration: 98.998238ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:06:15.945833Z","caller":"traceutil/trace.go:171","msg":"trace[866514615] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"181.389131ms","start":"2024-08-29T18:06:15.764436Z","end":"2024-08-29T18:06:15.945825Z","steps":["trace[866514615] 'process raft request'  (duration: 180.806444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:15.946042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.150959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:15.946098Z","caller":"traceutil/trace.go:171","msg":"trace[2012632869] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:515; }","duration":"192.218501ms","start":"2024-08-29T18:06:15.753869Z","end":"2024-08-29T18:06:15.946088Z","steps":["trace[2012632869] 'agreement among raft nodes before linearized reading'  (duration: 192.106939ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946172Z","caller":"traceutil/trace.go:171","msg":"trace[142374409] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"101.01965ms","start":"2024-08-29T18:06:15.845144Z","end":"2024-08-29T18:06:15.946163Z","steps":["trace[142374409] 'process raft request'  (duration: 100.171837ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946262Z","caller":"traceutil/trace.go:171","msg":"trace[1373251369] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"101.10566ms","start":"2024-08-29T18:06:15.845146Z","end":"2024-08-29T18:06:15.946252Z","steps":["trace[1373251369] 'process raft request'  (duration: 100.19817ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946280Z","caller":"traceutil/trace.go:171","msg":"trace[650896772] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"181.671652ms","start":"2024-08-29T18:06:15.764601Z","end":"2024-08-29T18:06:15.946273Z","steps":["trace[650896772] 'process raft request'  (duration: 180.68021ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:15.947176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.060944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/local-path\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:15.947209Z","caller":"traceutil/trace.go:171","msg":"trace[2050858094] range","detail":"{range_begin:/registry/storageclasses/local-path; range_end:; response_count:0; response_revision:518; }","duration":"102.103563ms","start":"2024-08-29T18:06:15.845096Z","end":"2024-08-29T18:06:15.947200Z","steps":["trace[2050858094] 'agreement among raft nodes before linearized reading'  (duration: 101.928133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:07:09.800169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.050132ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031540939107167 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/gadget/gadget-xpbfc\" mod_revision:1165 > success:<request_put:<key:\"/registry/pods/gadget/gadget-xpbfc\" value_size:12390 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-xpbfc\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-29T18:07:09.800248Z","caller":"traceutil/trace.go:171","msg":"trace[1408882006] linearizableReadLoop","detail":"{readStateIndex:1206; appliedIndex:1205; }","duration":"133.531974ms","start":"2024-08-29T18:07:09.666705Z","end":"2024-08-29T18:07:09.800237Z","steps":["trace[1408882006] 'read index received'  (duration: 19.946533ms)","trace[1408882006] 'applied index is now lower than readState.Index'  (duration: 113.584532ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:07:09.800310Z","caller":"traceutil/trace.go:171","msg":"trace[645338846] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"199.150213ms","start":"2024-08-29T18:07:09.601149Z","end":"2024-08-29T18:07:09.800300Z","steps":["trace[645338846] 'process raft request'  (duration: 85.48217ms)","trace[645338846] 'compare'  (duration: 112.96922ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:07:09.800446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.733421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-proxy-56c89.17f0453e2283edaa\" ","response":"range_response_count:1 size:811"}
	{"level":"info","ts":"2024-08-29T18:07:09.800570Z","caller":"traceutil/trace.go:171","msg":"trace[1967698203] range","detail":"{range_begin:/registry/events/kube-system/registry-proxy-56c89.17f0453e2283edaa; range_end:; response_count:1; response_revision:1175; }","duration":"133.858756ms","start":"2024-08-29T18:07:09.666695Z","end":"2024-08-29T18:07:09.800554Z","steps":["trace[1967698203] 'agreement among raft nodes before linearized reading'  (duration: 133.655669ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:07:32.539822Z","caller":"traceutil/trace.go:171","msg":"trace[474774062] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"116.907065ms","start":"2024-08-29T18:07:32.422893Z","end":"2024-08-29T18:07:32.539801Z","steps":["trace[474774062] 'process raft request'  (duration: 116.785483ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:16:02.407524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1637}
	{"level":"info","ts":"2024-08-29T18:16:02.431918Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1637,"took":"23.970648ms","hash":2632862633,"current-db-size-bytes":6815744,"current-db-size":"6.8 MB","current-db-size-in-use-bytes":3559424,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-08-29T18:16:02.431962Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2632862633,"revision":1637,"compact-revision":-1}
	
	
	==> gcp-auth [751a953e0230f7226fd0d5854c1b2e02172545fd27536cb15928df5e0e27c66c] <==
	2024/08/29 18:08:00 Ready to write response ...
	2024/08/29 18:16:13 Ready to marshal response ...
	2024/08/29 18:16:13 Ready to write response ...
	2024/08/29 18:16:14 Ready to marshal response ...
	2024/08/29 18:16:14 Ready to write response ...
	2024/08/29 18:16:23 Ready to marshal response ...
	2024/08/29 18:16:23 Ready to write response ...
	2024/08/29 18:16:42 Ready to marshal response ...
	2024/08/29 18:16:42 Ready to write response ...
	2024/08/29 18:17:04 Ready to marshal response ...
	2024/08/29 18:17:04 Ready to write response ...
	2024/08/29 18:17:07 Ready to marshal response ...
	2024/08/29 18:17:07 Ready to write response ...
	2024/08/29 18:17:07 Ready to marshal response ...
	2024/08/29 18:17:07 Ready to write response ...
	2024/08/29 18:17:15 Ready to marshal response ...
	2024/08/29 18:17:15 Ready to write response ...
	2024/08/29 18:17:40 Ready to marshal response ...
	2024/08/29 18:17:40 Ready to write response ...
	2024/08/29 18:17:40 Ready to marshal response ...
	2024/08/29 18:17:40 Ready to write response ...
	2024/08/29 18:17:40 Ready to marshal response ...
	2024/08/29 18:17:40 Ready to write response ...
	2024/08/29 18:18:33 Ready to marshal response ...
	2024/08/29 18:18:33 Ready to write response ...
	
	
	==> kernel <==
	 18:18:43 up  2:01,  0 users,  load average: 0.16, 0.32, 0.34
	Linux addons-970414 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] <==
	I0829 18:16:39.547968       1 main.go:299] handling current node
	I0829 18:16:49.547083       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:16:49.547113       1 main.go:299] handling current node
	I0829 18:16:59.546493       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:16:59.546524       1 main.go:299] handling current node
	I0829 18:17:09.546089       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:17:09.546135       1 main.go:299] handling current node
	I0829 18:17:19.547107       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:17:19.547140       1 main.go:299] handling current node
	I0829 18:17:29.548121       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:17:29.548157       1 main.go:299] handling current node
	I0829 18:17:39.547917       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:17:39.547959       1 main.go:299] handling current node
	I0829 18:17:49.547503       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:17:49.547543       1 main.go:299] handling current node
	I0829 18:17:59.546375       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:17:59.546416       1 main.go:299] handling current node
	I0829 18:18:09.546061       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:18:09.546093       1 main.go:299] handling current node
	I0829 18:18:19.546088       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:18:19.546127       1 main.go:299] handling current node
	I0829 18:18:29.555155       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:18:29.555187       1 main.go:299] handling current node
	I0829 18:18:39.547800       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:18:39.547854       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] <==
	 > logger="UnhandledError"
	E0829 18:07:50.129455       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.191.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.191.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.191.20:443: connect: connection refused" logger="UnhandledError"
	I0829 18:07:50.162059       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 18:16:08.739816       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:16:09.755954       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:16:14.377413       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:16:14.646684       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.164.80"}
	I0829 18:16:33.632960       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 18:16:58.558923       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.558971       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.571597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.645767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.645925       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.645986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.653426       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.653571       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.671184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.671217       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:16:59.646907       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:16:59.671973       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0829 18:16:59.768892       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0829 18:17:05.584431       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.27:50460: read: connection reset by peer
	E0829 18:17:31.357611       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0829 18:17:40.550312       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.202.155"}
	I0829 18:18:33.360808       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.143.28"}
	
	
	==> kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] <==
	I0829 18:17:44.262563       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="67.97µs"
	I0829 18:17:44.276219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.791748ms"
	I0829 18:17:44.276317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="51.523µs"
	I0829 18:17:50.013891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.199µs"
	W0829 18:17:57.433292       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:57.433329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:18:00.111718       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0829 18:18:03.470301       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0829 18:18:07.430842       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:07.430888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:18:10.573035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-970414"
	W0829 18:18:15.384377       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:15.384412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:18.696415       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:18.696452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:18:33.153744       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.720146ms"
	I0829 18:18:33.157619       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="3.8246ms"
	I0829 18:18:33.157696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.877µs"
	I0829 18:18:33.161376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="35.028µs"
	I0829 18:18:34.386732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.796431ms"
	I0829 18:18:34.386892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.595µs"
	I0829 18:18:34.863826       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0829 18:18:34.866320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.409µs"
	I0829 18:18:34.867568       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0829 18:18:41.073899       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-970414"
	
	
	==> kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] <==
	I0829 18:06:14.059690       1 server_linux.go:66] "Using iptables proxy"
	I0829 18:06:15.156032       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0829 18:06:15.158564       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:15.952517       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0829 18:06:15.952637       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:15.966318       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:15.967679       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:15.967714       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:15.969000       1 config.go:197] "Starting service config controller"
	I0829 18:06:15.969038       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:15.969060       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:15.969064       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:15.969485       1 config.go:326] "Starting node config controller"
	I0829 18:06:15.969491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:16.146707       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:06:16.150259       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:16.150276       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] <==
	W0829 18:06:03.754191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:03.755458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:06:03.755510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:03.755547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:06:03.755582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:03.755613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.755907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0829 18:06:03.755927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0829 18:06:03.755940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:03.755944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:03.755960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:03.755964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.755928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:03.756013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.756050       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:03.756071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:04.767168       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:04.767208       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:06:04.816545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:04.816611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0829 18:06:06.651649       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:18:34 addons-970414 kubelet[1626]: I0829 18:18:34.304036    1626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4sqzp\" (UniqueName: \"kubernetes.io/projected/6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1-kube-api-access-4sqzp\") on node \"addons-970414\" DevicePath \"\""
	Aug 29 18:18:34 addons-970414 kubelet[1626]: I0829 18:18:34.358006    1626 scope.go:117] "RemoveContainer" containerID="56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40"
	Aug 29 18:18:34 addons-970414 kubelet[1626]: I0829 18:18:34.373750    1626 scope.go:117] "RemoveContainer" containerID="56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40"
	Aug 29 18:18:34 addons-970414 kubelet[1626]: E0829 18:18:34.374228    1626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40\": container with ID starting with 56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40 not found: ID does not exist" containerID="56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40"
	Aug 29 18:18:34 addons-970414 kubelet[1626]: I0829 18:18:34.374276    1626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40"} err="failed to get container status \"56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40\": rpc error: code = NotFound desc = could not find container \"56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40\": container with ID starting with 56e6e682132441f9ef8868843bb6c177fb6024e540da87f680d62e528a08fa40 not found: ID does not exist"
	Aug 29 18:18:34 addons-970414 kubelet[1626]: I0829 18:18:34.381213    1626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-28jdv" podStartSLOduration=0.717310946 podStartE2EDuration="1.38119429s" podCreationTimestamp="2024-08-29 18:18:33 +0000 UTC" firstStartedPulling="2024-08-29 18:18:33.494635411 +0000 UTC m=+747.435024642" lastFinishedPulling="2024-08-29 18:18:34.158518766 +0000 UTC m=+748.098907986" observedRunningTime="2024-08-29 18:18:34.381016352 +0000 UTC m=+748.321405598" watchObservedRunningTime="2024-08-29 18:18:34.38119429 +0000 UTC m=+748.321583526"
	Aug 29 18:18:36 addons-970414 kubelet[1626]: I0829 18:18:36.155964    1626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1" path="/var/lib/kubelet/pods/6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1/volumes"
	Aug 29 18:18:36 addons-970414 kubelet[1626]: I0829 18:18:36.156359    1626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40b9c23-5222-42b6-9b42-13c333f2b251" path="/var/lib/kubelet/pods/b40b9c23-5222-42b6-9b42-13c333f2b251/volumes"
	Aug 29 18:18:36 addons-970414 kubelet[1626]: I0829 18:18:36.156674    1626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbaf322d-56ab-4104-b0fd-2e0db722dc1f" path="/var/lib/kubelet/pods/fbaf322d-56ab-4104-b0fd-2e0db722dc1f/volumes"
	Aug 29 18:18:36 addons-970414 kubelet[1626]: E0829 18:18:36.397653    1626 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955516397412319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:18:36 addons-970414 kubelet[1626]: E0829 18:18:36.397690    1626 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955516397412319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.127905    1626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvvnh\" (UniqueName: \"kubernetes.io/projected/4e53ad5a-0419-423f-baf6-3ccfce3a4256-kube-api-access-hvvnh\") pod \"4e53ad5a-0419-423f-baf6-3ccfce3a4256\" (UID: \"4e53ad5a-0419-423f-baf6-3ccfce3a4256\") "
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.127949    1626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e53ad5a-0419-423f-baf6-3ccfce3a4256-webhook-cert\") pod \"4e53ad5a-0419-423f-baf6-3ccfce3a4256\" (UID: \"4e53ad5a-0419-423f-baf6-3ccfce3a4256\") "
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.129736    1626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e53ad5a-0419-423f-baf6-3ccfce3a4256-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4e53ad5a-0419-423f-baf6-3ccfce3a4256" (UID: "4e53ad5a-0419-423f-baf6-3ccfce3a4256"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.129776    1626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e53ad5a-0419-423f-baf6-3ccfce3a4256-kube-api-access-hvvnh" (OuterVolumeSpecName: "kube-api-access-hvvnh") pod "4e53ad5a-0419-423f-baf6-3ccfce3a4256" (UID: "4e53ad5a-0419-423f-baf6-3ccfce3a4256"). InnerVolumeSpecName "kube-api-access-hvvnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.156263    1626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e53ad5a-0419-423f-baf6-3ccfce3a4256" path="/var/lib/kubelet/pods/4e53ad5a-0419-423f-baf6-3ccfce3a4256/volumes"
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.229168    1626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hvvnh\" (UniqueName: \"kubernetes.io/projected/4e53ad5a-0419-423f-baf6-3ccfce3a4256-kube-api-access-hvvnh\") on node \"addons-970414\" DevicePath \"\""
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.229199    1626 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e53ad5a-0419-423f-baf6-3ccfce3a4256-webhook-cert\") on node \"addons-970414\" DevicePath \"\""
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.368510    1626 scope.go:117] "RemoveContainer" containerID="87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd"
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.381706    1626 scope.go:117] "RemoveContainer" containerID="87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd"
	Aug 29 18:18:38 addons-970414 kubelet[1626]: E0829 18:18:38.382028    1626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd\": container with ID starting with 87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd not found: ID does not exist" containerID="87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd"
	Aug 29 18:18:38 addons-970414 kubelet[1626]: I0829 18:18:38.382063    1626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd"} err="failed to get container status \"87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd\": rpc error: code = NotFound desc = could not find container \"87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd\": container with ID starting with 87396c3a6a26a4f50f337adabedb5eee0b87d6d5332f927f281dbd66c4c237dd not found: ID does not exist"
	Aug 29 18:18:40 addons-970414 kubelet[1626]: E0829 18:18:40.216105    1626 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 29 18:18:40 addons-970414 kubelet[1626]: E0829 18:18:40.216240    1626 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:busybox,Image:gcr.io/k8s-minikube/busybox:1.28.4-glibc,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wnnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod busybox_default(ddd0079b-3cc0-46e0-bbb3-756312e7522b): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" logger="UnhandledError"
	Aug 29 18:18:40 addons-970414 kubelet[1626]: E0829 18:18:40.217391    1626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: authentication failed\"" pod="default/busybox" podUID="ddd0079b-3cc0-46e0-bbb3-756312e7522b"
	
	
	==> storage-provisioner [fc284d6f42abd5ee85cea3d425a167f1747f738b8330187c43ca42227f77adb7] <==
	I0829 18:06:30.446216       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:06:30.457153       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:06:30.457203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:06:30.464533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:06:30.464681       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278!
	I0829 18:06:30.464679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f572a30f-1e05-4d7e-a66a-2b263d676001", APIVersion:"v1", ResourceVersion:"937", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278 became leader
	I0829 18:06:30.565419       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-970414 -n addons-970414
helpers_test.go:261: (dbg) Run:  kubectl --context addons-970414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-970414 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-970414 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-970414/192.168.49.2
	Start Time:       Thu, 29 Aug 2024 18:08:00 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9wnnt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9wnnt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/busybox to addons-970414
	  Normal   Pulling    9m15s (x4 over 10m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m15s (x4 over 10m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m15s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m1s (x6 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    30s (x42 over 10m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (149.83s)

TestAddons/parallel/MetricsServer (326.2s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.20592ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-jss9n" [a866f6c5-ff40-4062-986b-ddae9310879c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003053679s
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (62.88158ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 9m58.36990019s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (71.715112ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 10m2.86593169s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (65.385204ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 10m8.335198759s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (67.875856ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 10m15.0576869s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (62.514384ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 10m25.132704238s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (61.247823ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 10m32.955534333s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (60.95801ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 10m45.898671401s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (71.554482ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 11m22.430907s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (61.741783ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 11m51.149928285s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (61.141055ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 13m18.348910677s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (59.898673ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 14m2.207724279s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-970414 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-970414 top pods -n kube-system: exit status 1 (59.243609ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-jxrb9, age: 15m16.115758348s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-970414
helpers_test.go:235: (dbg) docker inspect addons-970414:

-- stdout --
	[
	    {
	        "Id": "41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f",
	        "Created": "2024-08-29T18:05:50.989469594Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 34227,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-29T18:05:51.114817177Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:33319d96a2f78fe466b6d8cbd88671515fca2b1eded3ce0b5f6d545b670a78ac",
	        "ResolvConfPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/hostname",
	        "HostsPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/hosts",
	        "LogPath": "/var/lib/docker/containers/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f-json.log",
	        "Name": "/addons-970414",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-970414:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-970414",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef-init/diff:/var/lib/docker/overlay2/05fc462985fa2f024c01de3a02bf0ead4c06c5840250f2e5986b9e50a75da4c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9fa8791b213d0aa9aa8bbb725639f5cf4627e25f25fd0b9c0eeb7c4318c02ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-970414",
	                "Source": "/var/lib/docker/volumes/addons-970414/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-970414",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-970414",
	                "name.minikube.sigs.k8s.io": "addons-970414",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "978d127d7df61acbbd8935def9a64eff58519190d009a49d3457d2ba97b12a1f",
	            "SandboxKey": "/var/run/docker/netns/978d127d7df6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-970414": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c2cbcee4e25a4578dadcd50e3b7deda46b3aa188961837c3614b63db18a2f3b7",
	                    "EndpointID": "4a8075a86adc8f2be9df3038096489cf43023ca173ac09f522f3ebac0bd13872",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-970414",
	                        "41a3cf6921c1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-970414 -n addons-970414
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-970414 logs -n 25: (1.100390354s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-806390 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | download-docker-806390                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-806390                                                                   | download-docker-806390 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-708315   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | binary-mirror-708315                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45431                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-708315                                                                     | binary-mirror-708315   | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| addons  | enable dashboard -p                                                                         | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | addons-970414                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | addons-970414                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-970414 --wait=true                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | addons-970414                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-970414 ssh curl -s                                                                   | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-970414 addons                                                                        | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-970414 addons                                                                        | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-970414 ip                                                                            | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-970414 ssh cat                                                                       | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | /opt/local-path-provisioner/pvc-ca648e25-cf9d-4c60-9189-df073bc95d42_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-970414                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | addons-970414                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-970414                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-970414 ip                                                                            | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-970414 addons disable                                                                | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-970414 addons                                                                        | addons-970414          | jenkins | v1.33.1 | 29 Aug 24 18:21 UTC | 29 Aug 24 18:21 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:27.001060   33471 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:27.001195   33471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:27.001206   33471 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:27.001213   33471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:27.001566   33471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:05:27.002146   33471 out.go:352] Setting JSON to false
	I0829 18:05:27.002926   33471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6478,"bootTime":1724948249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:27.002981   33471 start.go:139] virtualization: kvm guest
	I0829 18:05:27.004975   33471 out.go:177] * [addons-970414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:05:27.006205   33471 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:05:27.006225   33471 notify.go:220] Checking for updates...
	I0829 18:05:27.008297   33471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:27.009428   33471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:05:27.010459   33471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	I0829 18:05:27.011630   33471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:05:27.012666   33471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:05:27.013855   33471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:27.034066   33471 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:27.034178   33471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:27.081939   33471 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:05:27.073820971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:27.082037   33471 docker.go:307] overlay module found
	I0829 18:05:27.083769   33471 out.go:177] * Using the docker driver based on user configuration
	I0829 18:05:27.084831   33471 start.go:297] selected driver: docker
	I0829 18:05:27.084843   33471 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:27.084856   33471 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:05:27.085566   33471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:27.128935   33471 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:05:27.120299564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:27.129150   33471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:27.129407   33471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:05:27.130954   33471 out.go:177] * Using Docker driver with root privileges
	I0829 18:05:27.132457   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:05:27.132474   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:05:27.132483   33471 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 18:05:27.132551   33471 start.go:340] cluster config:
	{Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:27.134145   33471 out.go:177] * Starting "addons-970414" primary control-plane node in "addons-970414" cluster
	I0829 18:05:27.135511   33471 cache.go:121] Beginning downloading kic base image for docker with crio
	I0829 18:05:27.137027   33471 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:05:27.138262   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:27.138302   33471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:05:27.138309   33471 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:27.138353   33471 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:05:27.138388   33471 preload.go:172] Found /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:05:27.138398   33471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:05:27.138727   33471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json ...
	I0829 18:05:27.138747   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json: {Name:mke2d7298c74312a04e88e452c7a2b0ef6f2c5fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:27.153622   33471 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:27.153732   33471 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:05:27.153749   33471 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0829 18:05:27.153754   33471 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0829 18:05:27.153762   33471 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0829 18:05:27.153769   33471 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0829 18:05:38.808665   33471 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0829 18:05:38.808699   33471 cache.go:194] Successfully downloaded all kic artifacts
	I0829 18:05:38.808727   33471 start.go:360] acquireMachinesLock for addons-970414: {Name:mkb69a163e0d8e2549bad474fa195b7110791498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:38.808834   33471 start.go:364] duration metric: took 89.086µs to acquireMachinesLock for "addons-970414"
	I0829 18:05:38.808859   33471 start.go:93] Provisioning new machine with config: &{Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:05:38.808941   33471 start.go:125] createHost starting for "" (driver="docker")
	I0829 18:05:38.810903   33471 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0829 18:05:38.811159   33471 start.go:159] libmachine.API.Create for "addons-970414" (driver="docker")
	I0829 18:05:38.811196   33471 client.go:168] LocalClient.Create starting
	I0829 18:05:38.811308   33471 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem
	I0829 18:05:38.888624   33471 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem
	I0829 18:05:39.225744   33471 cli_runner.go:164] Run: docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0829 18:05:39.242445   33471 cli_runner.go:211] docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0829 18:05:39.242507   33471 network_create.go:284] running [docker network inspect addons-970414] to gather additional debugging logs...
	I0829 18:05:39.242525   33471 cli_runner.go:164] Run: docker network inspect addons-970414
	W0829 18:05:39.257100   33471 cli_runner.go:211] docker network inspect addons-970414 returned with exit code 1
	I0829 18:05:39.257130   33471 network_create.go:287] error running [docker network inspect addons-970414]: docker network inspect addons-970414: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-970414 not found
	I0829 18:05:39.257147   33471 network_create.go:289] output of [docker network inspect addons-970414]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-970414 not found
	
	** /stderr **
	I0829 18:05:39.257238   33471 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:05:39.272618   33471 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a7c8d0}
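The `network.go:206` line above shows minikube settling on the first free private subnet (192.168.49.0/24) before creating the Docker network. A minimal Python sketch of that idea, assuming a hypothetical `pick_free_subnet` helper and a step of 9 between candidate third octets; this mirrors the concept only, not minikube's actual code:

```python
import ipaddress

def pick_free_subnet(taken, start="192.168.49.0/24", step=9, tries=20):
    """Walk candidate /24 subnets starting at `start`, skipping any that
    overlap a subnet already in use, and return the first free one."""
    net = ipaddress.ip_network(start)
    for _ in range(tries):
        if not any(net.overlaps(t) for t in taken):
            return net
        # hop to the next candidate by bumping the third octet
        third = net.network_address.packed[2] + step
        net = ipaddress.ip_network(f"192.168.{third}.0/24")
    return None
```

With no networks taken this returns 192.168.49.0/24, matching the subnet chosen in the log; if that subnet were occupied it would move on to the next candidate.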
	I0829 18:05:39.272664   33471 network_create.go:124] attempt to create docker network addons-970414 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0829 18:05:39.272707   33471 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-970414 addons-970414
	I0829 18:05:39.331357   33471 network_create.go:108] docker network addons-970414 192.168.49.0/24 created
	I0829 18:05:39.331388   33471 kic.go:121] calculated static IP "192.168.49.2" for the "addons-970414" container
	I0829 18:05:39.331435   33471 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0829 18:05:39.346156   33471 cli_runner.go:164] Run: docker volume create addons-970414 --label name.minikube.sigs.k8s.io=addons-970414 --label created_by.minikube.sigs.k8s.io=true
	I0829 18:05:39.361798   33471 oci.go:103] Successfully created a docker volume addons-970414
	I0829 18:05:39.361884   33471 cli_runner.go:164] Run: docker run --rm --name addons-970414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --entrypoint /usr/bin/test -v addons-970414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0829 18:05:46.571826   33471 cli_runner.go:217] Completed: docker run --rm --name addons-970414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --entrypoint /usr/bin/test -v addons-970414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (7.209903568s)
	I0829 18:05:46.571853   33471 oci.go:107] Successfully prepared a docker volume addons-970414
	I0829 18:05:46.571874   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:46.571894   33471 kic.go:194] Starting extracting preloaded images to volume ...
	I0829 18:05:46.571970   33471 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-970414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0829 18:05:50.930587   33471 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-970414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (4.358576097s)
	I0829 18:05:50.930618   33471 kic.go:203] duration metric: took 4.358721922s to extract preloaded images to volume ...
	W0829 18:05:50.930753   33471 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0829 18:05:50.930875   33471 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0829 18:05:50.975554   33471 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-970414 --name addons-970414 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-970414 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-970414 --network addons-970414 --ip 192.168.49.2 --volume addons-970414:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0829 18:05:51.268886   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Running}}
	I0829 18:05:51.285523   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.304601   33471 cli_runner.go:164] Run: docker exec addons-970414 stat /var/lib/dpkg/alternatives/iptables
	I0829 18:05:51.347960   33471 oci.go:144] the created container "addons-970414" has a running status.
	I0829 18:05:51.347988   33471 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa...
	I0829 18:05:51.440365   33471 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0829 18:05:51.459363   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.476716   33471 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0829 18:05:51.476740   33471 kic_runner.go:114] Args: [docker exec --privileged addons-970414 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0829 18:05:51.517330   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:05:51.534066   33471 machine.go:93] provisionDockerMachine start ...
	I0829 18:05:51.534151   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:51.554839   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:51.555038   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:51.555054   33471 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:05:51.555753   33471 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39654->127.0.0.1:32768: read: connection reset by peer
	I0829 18:05:54.683865   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-970414
	
	I0829 18:05:54.683900   33471 ubuntu.go:169] provisioning hostname "addons-970414"
	I0829 18:05:54.683958   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:54.699445   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:54.699631   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:54.699643   33471 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-970414 && echo "addons-970414" | sudo tee /etc/hostname
	I0829 18:05:54.830897   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-970414
	
	I0829 18:05:54.830993   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:54.847116   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:54.847297   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:54.847323   33471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-970414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-970414/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-970414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:05:54.972384   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:05:54.972411   33471 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19531-25336/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-25336/.minikube}
	I0829 18:05:54.972428   33471 ubuntu.go:177] setting up certificates
	I0829 18:05:54.972440   33471 provision.go:84] configureAuth start
	I0829 18:05:54.972492   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:54.988585   33471 provision.go:143] copyHostCerts
	I0829 18:05:54.988673   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/ca.pem (1078 bytes)
	I0829 18:05:54.988829   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/cert.pem (1123 bytes)
	I0829 18:05:54.988951   33471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-25336/.minikube/key.pem (1679 bytes)
	I0829 18:05:54.989024   33471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem org=jenkins.addons-970414 san=[127.0.0.1 192.168.49.2 addons-970414 localhost minikube]
	I0829 18:05:55.147597   33471 provision.go:177] copyRemoteCerts
	I0829 18:05:55.147661   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:05:55.147709   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.165771   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.256506   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:05:55.276475   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:05:55.296322   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:05:55.315859   33471 provision.go:87] duration metric: took 343.406508ms to configureAuth
	I0829 18:05:55.315880   33471 ubuntu.go:193] setting minikube options for container-runtime
	I0829 18:05:55.316058   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:05:55.316165   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.332100   33471 main.go:141] libmachine: Using SSH client type: native
	I0829 18:05:55.332269   33471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:05:55.332292   33471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:05:55.536223   33471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:05:55.536246   33471 machine.go:96] duration metric: took 4.002156332s to provisionDockerMachine
	I0829 18:05:55.536256   33471 client.go:171] duration metric: took 16.725048882s to LocalClient.Create
	I0829 18:05:55.536279   33471 start.go:167] duration metric: took 16.725121559s to libmachine.API.Create "addons-970414"
	I0829 18:05:55.536289   33471 start.go:293] postStartSetup for "addons-970414" (driver="docker")
	I0829 18:05:55.536302   33471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:05:55.536358   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:05:55.536404   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.552022   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.640805   33471 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:05:55.643619   33471 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0829 18:05:55.643648   33471 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0829 18:05:55.643657   33471 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0829 18:05:55.643662   33471 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0829 18:05:55.643672   33471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-25336/.minikube/addons for local assets ...
	I0829 18:05:55.643725   33471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-25336/.minikube/files for local assets ...
	I0829 18:05:55.643751   33471 start.go:296] duration metric: took 107.457009ms for postStartSetup
	I0829 18:05:55.643994   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:55.660003   33471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/config.json ...
	I0829 18:05:55.660247   33471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:05:55.660293   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.675451   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.760973   33471 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0829 18:05:55.764592   33471 start.go:128] duration metric: took 16.955640874s to createHost
	I0829 18:05:55.764614   33471 start.go:83] releasing machines lock for "addons-970414", held for 16.955766323s
	I0829 18:05:55.764673   33471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-970414
	I0829 18:05:55.780103   33471 ssh_runner.go:195] Run: cat /version.json
	I0829 18:05:55.780144   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.780194   33471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:05:55.780253   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:05:55.797444   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.797887   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:05:55.953349   33471 ssh_runner.go:195] Run: systemctl --version
	I0829 18:05:55.957132   33471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:05:56.091366   33471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 18:05:56.095285   33471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:05:56.111209   33471 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0829 18:05:56.111281   33471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:05:56.134706   33471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0829 18:05:56.134730   33471 start.go:495] detecting cgroup driver to use...
	I0829 18:05:56.134763   33471 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:05:56.134812   33471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:05:56.147385   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:05:56.156613   33471 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:05:56.156666   33471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:05:56.168092   33471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:05:56.179938   33471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:05:56.252028   33471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:05:56.327750   33471 docker.go:233] disabling docker service ...
	I0829 18:05:56.327807   33471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:05:56.343956   33471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:05:56.353288   33471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:05:56.427251   33471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:05:56.508717   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:05:56.518265   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:05:56.531476   33471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:05:56.531549   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.539410   33471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:05:56.539458   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.547577   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.555487   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.563452   33471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:05:56.570823   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.578587   33471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.591295   33471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:05:56.599128   33471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:05:56.605733   33471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:05:56.612545   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:05:56.686246   33471 ssh_runner.go:195] Run: sudo systemctl restart crio
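The run above reconfigures CRI-O through a series of in-place `sed` edits on /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) before restarting the daemon. A minimal Python sketch of the same substitutions, operating on an assumed sample config string rather than the real file:

```python
import re

# hypothetical starting contents of 02-crio.conf
conf = '''pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
'''

# pin the pause image, as the first sed does
conf = re.sub(r'(?m)^.*pause_image = .*$',
              'pause_image = "registry.k8s.io/pause:3.10"', conf)
# switch the cgroup manager to cgroupfs
conf = re.sub(r'(?m)^.*cgroup_manager = .*$',
              'cgroup_manager = "cgroupfs"', conf)
# delete any existing conmon_cgroup line, then re-add it as "pod"
# immediately after cgroup_manager (mirroring the sed d/a pair)
conf = re.sub(r'(?m)^conmon_cgroup = .*\n', '', conf)
conf = re.sub(r'(?m)^(cgroup_manager = .*)$',
              r'\1\nconmon_cgroup = "pod"', conf)
```

Deleting and re-adding `conmon_cgroup` (rather than editing it in place) matches the log's approach and also covers configs where the key is absent.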
	I0829 18:05:56.769888   33471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:05:56.769948   33471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:05:56.772991   33471 start.go:563] Will wait 60s for crictl version
	I0829 18:05:56.773031   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:05:56.775690   33471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:05:56.808215   33471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0829 18:05:56.808328   33471 ssh_runner.go:195] Run: crio --version
	I0829 18:05:56.840217   33471 ssh_runner.go:195] Run: crio --version
	I0829 18:05:56.872925   33471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0829 18:05:56.874122   33471 cli_runner.go:164] Run: docker network inspect addons-970414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:05:56.889469   33471 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0829 18:05:56.892591   33471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:05:56.901877   33471 kubeadm.go:883] updating cluster {Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:05:56.902001   33471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:56.902058   33471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:05:56.960945   33471 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:05:56.960966   33471 crio.go:433] Images already preloaded, skipping extraction
	I0829 18:05:56.961005   33471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:05:56.996565   33471 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:05:56.996586   33471 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:05:56.996594   33471 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0829 18:05:56.996695   33471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-970414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:05:56.996788   33471 ssh_runner.go:195] Run: crio config
	I0829 18:05:57.034951   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:05:57.034976   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:05:57.035004   33471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:05:57.035037   33471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-970414 NodeName:addons-970414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:05:57.035200   33471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-970414"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:05:57.035264   33471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:05:57.043209   33471 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:05:57.043270   33471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:05:57.050815   33471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0829 18:05:57.065626   33471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:05:57.080858   33471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0829 18:05:57.095282   33471 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0829 18:05:57.098211   33471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
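The bash one-liner above upserts a hosts entry: it filters out any line already ending in the tab-separated hostname, appends the fresh `IP<TAB>name` pair, and copies the result back over /etc/hosts. A small Python sketch of the same logic, using a hypothetical `upsert_host` helper on a string instead of the real file:

```python
def upsert_host(hosts_text: str, ip: str, name: str) -> str:
    """Drop any line ending in "\t<name>", then append "<ip>\t<name>",
    so the entry exists exactly once (mirrors the grep -v + echo trick)."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```

Filter-then-append makes the operation idempotent: rerunning it with the same IP leaves the file unchanged, and rerunning with a new IP replaces the stale entry.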
	I0829 18:05:57.107337   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:05:57.174389   33471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:05:57.185656   33471 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414 for IP: 192.168.49.2
	I0829 18:05:57.185680   33471 certs.go:194] generating shared ca certs ...
	I0829 18:05:57.185701   33471 certs.go:226] acquiring lock for ca certs: {Name:mk67594a2aeddd90511e83e94fdec27741c5c194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.185831   33471 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key
	I0829 18:05:57.302579   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt ...
	I0829 18:05:57.302605   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt: {Name:mk68fcaae893468c94d7a84507010792fe808d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.302749   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key ...
	I0829 18:05:57.302759   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key: {Name:mk3ae49953961c47a1211facb56e8bc731cb5d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.302828   33471 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key
	I0829 18:05:57.397161   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt ...
	I0829 18:05:57.397188   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt: {Name:mkdea41367fabcd2965e87aed60d5a189212f9be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.397327   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key ...
	I0829 18:05:57.397337   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key: {Name:mk92e8ff155ca7dda7fa018998615e51c8a854aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.397397   33471 certs.go:256] generating profile certs ...
	I0829 18:05:57.397452   33471 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key
	I0829 18:05:57.397465   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt with IP's: []
	I0829 18:05:57.456687   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt ...
	I0829 18:05:57.456714   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: {Name:mkca0def83df75bdcbf967a5612ca78646681086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.456865   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key ...
	I0829 18:05:57.456879   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.key: {Name:mk7a68ec7addac3a4cb5327ed442f621166ad28c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.456954   33471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7
	I0829 18:05:57.456972   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0829 18:05:57.557157   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 ...
	I0829 18:05:57.557189   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7: {Name:mk1e987fdce57178fa8bc6d220419e4e702f2022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.557369   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7 ...
	I0829 18:05:57.557386   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7: {Name:mkcb99136185dcb54ad76bcdd5f51f3bb874c708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.557477   33471 certs.go:381] copying /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt.e98266b7 -> /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt
	I0829 18:05:57.557565   33471 certs.go:385] copying /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key.e98266b7 -> /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key
	I0829 18:05:57.557628   33471 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key
	I0829 18:05:57.557653   33471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt with IP's: []
	I0829 18:05:57.665009   33471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt ...
	I0829 18:05:57.665035   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt: {Name:mka7b9add077f78b858c255a0787554628ae81a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.665204   33471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key ...
	I0829 18:05:57.665218   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key: {Name:mkf9f0b064442d85a7a36a00447d2e06028bbb5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:57.665423   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 18:05:57.665464   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:05:57.665500   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:05:57.665529   33471 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-25336/.minikube/certs/key.pem (1679 bytes)
	I0829 18:05:57.666108   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:05:57.687482   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0829 18:05:57.707435   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:05:57.727015   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0829 18:05:57.746595   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:05:57.766741   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:05:57.786768   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:05:57.806898   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:05:57.827052   33471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:05:57.847405   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:05:57.862668   33471 ssh_runner.go:195] Run: openssl version
	I0829 18:05:57.867441   33471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:05:57.875492   33471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.878530   33471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.878584   33471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:05:57.884877   33471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:05:57.892902   33471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:05:57.895580   33471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:05:57.895625   33471 kubeadm.go:392] StartCluster: {Name:addons-970414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-970414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:57.895692   33471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:05:57.895727   33471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:05:57.927582   33471 cri.go:89] found id: ""
	I0829 18:05:57.927651   33471 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:05:57.935503   33471 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:05:57.943410   33471 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0829 18:05:57.943456   33471 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:05:57.950627   33471 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:05:57.950644   33471 kubeadm.go:157] found existing configuration files:
	
	I0829 18:05:57.950673   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:05:57.957427   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:05:57.957467   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:05:57.964066   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:05:57.971025   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:05:57.971075   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:05:57.977703   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:05:57.984450   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:05:57.984488   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:05:57.991201   33471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:05:57.998415   33471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:05:57.998451   33471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:05:58.005349   33471 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0829 18:05:58.038494   33471 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:05:58.038555   33471 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:05:58.053584   33471 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0829 18:05:58.053680   33471 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-gcp
	I0829 18:05:58.053730   33471 kubeadm.go:310] OS: Linux
	I0829 18:05:58.053800   33471 kubeadm.go:310] CGROUPS_CPU: enabled
	I0829 18:05:58.053884   33471 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0829 18:05:58.053987   33471 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0829 18:05:58.054064   33471 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0829 18:05:58.054137   33471 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0829 18:05:58.054208   33471 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0829 18:05:58.054265   33471 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0829 18:05:58.054348   33471 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0829 18:05:58.054436   33471 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0829 18:05:58.098180   33471 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:05:58.098301   33471 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:05:58.098433   33471 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:05:58.103771   33471 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:05:58.106952   33471 out.go:235]   - Generating certificates and keys ...
	I0829 18:05:58.107046   33471 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:05:58.107111   33471 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:05:58.350564   33471 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:05:58.490294   33471 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:05:58.689041   33471 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:05:58.823978   33471 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:05:58.996208   33471 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:05:58.996351   33471 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-970414 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:05:59.072936   33471 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:05:59.073085   33471 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-970414 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:05:59.434980   33471 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:05:59.665647   33471 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:05:59.738102   33471 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:05:59.738192   33471 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:05:59.867228   33471 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:00.066025   33471 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:00.133026   33471 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:00.270509   33471 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:00.374793   33471 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:00.375247   33471 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:00.377672   33471 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:00.379594   33471 out.go:235]   - Booting up control plane ...
	I0829 18:06:00.379700   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:00.379784   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:00.379861   33471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:00.387817   33471 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:00.392895   33471 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:00.392953   33471 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:00.472796   33471 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:00.472952   33471 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:00.974304   33471 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.649814ms
	I0829 18:06:00.974388   33471 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:05.476183   33471 kubeadm.go:310] [api-check] The API server is healthy after 4.501825265s
	I0829 18:06:05.486362   33471 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:05.496924   33471 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:05.512283   33471 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:05.512547   33471 kubeadm.go:310] [mark-control-plane] Marking the node addons-970414 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:05.518748   33471 kubeadm.go:310] [bootstrap-token] Using token: jzv7iv.d89b87p5nvbumrzo
	I0829 18:06:05.520189   33471 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:05.520291   33471 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:05.522825   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:05.527262   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:05.530214   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:05.532304   33471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:05.534332   33471 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:05.883610   33471 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:06.302786   33471 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:06.881022   33471 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:06.881690   33471 kubeadm.go:310] 
	I0829 18:06:06.881760   33471 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:06.881773   33471 kubeadm.go:310] 
	I0829 18:06:06.881882   33471 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:06.881912   33471 kubeadm.go:310] 
	I0829 18:06:06.881972   33471 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:06.882062   33471 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:06.882212   33471 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:06.882230   33471 kubeadm.go:310] 
	I0829 18:06:06.882324   33471 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:06.882338   33471 kubeadm.go:310] 
	I0829 18:06:06.882403   33471 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:06.882413   33471 kubeadm.go:310] 
	I0829 18:06:06.882485   33471 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:06.882586   33471 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:06.882657   33471 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:06.882663   33471 kubeadm.go:310] 
	I0829 18:06:06.882741   33471 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:06.882807   33471 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:06.882813   33471 kubeadm.go:310] 
	I0829 18:06:06.882918   33471 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jzv7iv.d89b87p5nvbumrzo \
	I0829 18:06:06.883051   33471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ded35ef35e12d5a5396aa817ddf8ddaebf53b89969d35d052dfa46966e0eb6d3 \
	I0829 18:06:06.883081   33471 kubeadm.go:310] 	--control-plane 
	I0829 18:06:06.883091   33471 kubeadm.go:310] 
	I0829 18:06:06.883194   33471 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:06.883202   33471 kubeadm.go:310] 
	I0829 18:06:06.883319   33471 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jzv7iv.d89b87p5nvbumrzo \
	I0829 18:06:06.883476   33471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ded35ef35e12d5a5396aa817ddf8ddaebf53b89969d35d052dfa46966e0eb6d3 
	I0829 18:06:06.885210   33471 kubeadm.go:310] W0829 18:05:58.036060    1290 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:06.885484   33471 kubeadm.go:310] W0829 18:05:58.036646    1290 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:06.885706   33471 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-gcp\n", err: exit status 1
	I0829 18:06:06.885836   33471 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:06.885860   33471 cni.go:84] Creating CNI manager for ""
	I0829 18:06:06.885869   33471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:06:06.887826   33471 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0829 18:06:06.888997   33471 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0829 18:06:06.892550   33471 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0829 18:06:06.892565   33471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0829 18:06:06.908633   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0829 18:06:07.090336   33471 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:07.090410   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:07.090410   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-970414 minikube.k8s.io/updated_at=2024_08_29T18_06_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-970414 minikube.k8s.io/primary=true
	I0829 18:06:07.097357   33471 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:07.161653   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:07.662656   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:08.162155   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:08.662485   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:09.161763   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:09.662365   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:10.162060   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:10.662667   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:11.161738   33471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:11.225686   33471 kubeadm.go:1113] duration metric: took 4.135333724s to wait for elevateKubeSystemPrivileges
	I0829 18:06:11.225730   33471 kubeadm.go:394] duration metric: took 13.330107637s to StartCluster
	I0829 18:06:11.225753   33471 settings.go:142] acquiring lock: {Name:mk30ad9b0ff80001a546f289c6cc726b4c74119c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:11.225898   33471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:06:11.226419   33471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-25336/kubeconfig: {Name:mk79bdfdd62fbbebbe9b38ab62c3c3cce586ee25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:11.226636   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:11.226662   33471 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:11.226708   33471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:11.226817   33471 addons.go:69] Setting yakd=true in profile "addons-970414"
	I0829 18:06:11.226855   33471 addons.go:69] Setting inspektor-gadget=true in profile "addons-970414"
	I0829 18:06:11.226879   33471 addons.go:69] Setting metrics-server=true in profile "addons-970414"
	I0829 18:06:11.226895   33471 addons.go:234] Setting addon metrics-server=true in "addons-970414"
	I0829 18:06:11.226899   33471 addons.go:234] Setting addon inspektor-gadget=true in "addons-970414"
	I0829 18:06:11.226924   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226936   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226947   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:11.227018   33471 addons.go:69] Setting storage-provisioner=true in profile "addons-970414"
	I0829 18:06:11.227040   33471 addons.go:234] Setting addon storage-provisioner=true in "addons-970414"
	I0829 18:06:11.227065   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227153   33471 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-970414"
	I0829 18:06:11.227185   33471 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-970414"
	I0829 18:06:11.227245   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227436   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227450   33471 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-970414"
	I0829 18:06:11.227475   33471 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-970414"
	I0829 18:06:11.227599   33471 addons.go:69] Setting volcano=true in profile "addons-970414"
	I0829 18:06:11.227602   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227615   33471 addons.go:69] Setting registry=true in profile "addons-970414"
	I0829 18:06:11.227633   33471 addons.go:234] Setting addon volcano=true in "addons-970414"
	I0829 18:06:11.227658   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227660   33471 addons.go:234] Setting addon registry=true in "addons-970414"
	I0829 18:06:11.227676   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227689   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.227696   33471 addons.go:69] Setting volumesnapshots=true in profile "addons-970414"
	I0829 18:06:11.227718   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227722   33471 addons.go:234] Setting addon volumesnapshots=true in "addons-970414"
	I0829 18:06:11.227771   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.228076   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228080   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228209   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.228356   33471 addons.go:69] Setting gcp-auth=true in profile "addons-970414"
	I0829 18:06:11.228388   33471 mustload.go:65] Loading cluster: addons-970414
	I0829 18:06:11.228584   33471 config.go:182] Loaded profile config "addons-970414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:11.228856   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.229427   33471 addons.go:69] Setting ingress=true in profile "addons-970414"
	I0829 18:06:11.229880   33471 addons.go:234] Setting addon ingress=true in "addons-970414"
	I0829 18:06:11.230054   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.226869   33471 addons.go:234] Setting addon yakd=true in "addons-970414"
	I0829 18:06:11.232989   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.233478   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.227436   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.230761   33471 addons.go:69] Setting ingress-dns=true in profile "addons-970414"
	I0829 18:06:11.234357   33471 addons.go:234] Setting addon ingress-dns=true in "addons-970414"
	I0829 18:06:11.230771   33471 addons.go:69] Setting helm-tiller=true in profile "addons-970414"
	I0829 18:06:11.234426   33471 addons.go:234] Setting addon helm-tiller=true in "addons-970414"
	I0829 18:06:11.234428   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.234448   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.230778   33471 addons.go:69] Setting default-storageclass=true in profile "addons-970414"
	I0829 18:06:11.234865   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.234865   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.234897   33471 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-970414"
	I0829 18:06:11.235176   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.235691   33471 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:11.230855   33471 addons.go:69] Setting cloud-spanner=true in profile "addons-970414"
	I0829 18:06:11.230860   33471 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-970414"
	I0829 18:06:11.236330   33471 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-970414"
	I0829 18:06:11.236358   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.232013   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.236617   33471 addons.go:234] Setting addon cloud-spanner=true in "addons-970414"
	I0829 18:06:11.236656   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.238585   33471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:11.273627   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	W0829 18:06:11.273734   33471 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:06:11.274066   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.279122   33471 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:11.280278   33471 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:11.280382   33471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:11.280402   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:11.280450   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.280843   33471 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-970414"
	I0829 18:06:11.280884   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.281352   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.282826   33471 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:11.284222   33471 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:11.284250   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:11.284308   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.284471   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.287508   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:11.291534   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:11.291568   33471 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:11.291622   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.293330   33471 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:11.295302   33471 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:11.295320   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:11.295376   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.299261   33471 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:11.300709   33471 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:11.300725   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:11.300791   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.300909   33471 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:11.302087   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:11.302105   33471 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:11.302160   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.307761   33471 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:11.309677   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:11.309700   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:11.309766   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.320621   33471 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:11.325003   33471 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:11.325029   33471 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:11.325160   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.326386   33471 addons.go:234] Setting addon default-storageclass=true in "addons-970414"
	I0829 18:06:11.326435   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:11.326941   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:11.339593   33471 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:11.339663   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:11.342553   33471 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:11.342633   33471 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:11.344001   33471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:11.344018   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:11.344070   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.344221   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:11.344232   33471 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:11.344271   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.344391   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:11.346102   33471 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:11.347855   33471 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:11.348296   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:11.348371   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.350422   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:11.351792   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:11.354381   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:11.355688   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:11.357044   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:11.358332   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:11.360150   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:11.362855   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:11.364094   33471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:11.364346   33471 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:11.364366   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:11.364422   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.366038   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:11.366057   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:11.366122   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.368590   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.368828   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.377128   33471 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:11.377144   33471 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:11.377195   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:11.382256   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.392162   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.401881   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411536   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411725   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.411872   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.412557   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.413514   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.414653   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.415906   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.417956   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:11.421100   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	W0829 18:06:11.447767   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.447799   33471 retry.go:31] will retry after 276.757001ms: ssh: handshake failed: EOF
	W0829 18:06:11.449293   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.449316   33471 retry.go:31] will retry after 138.739567ms: ssh: handshake failed: EOF
	I0829 18:06:11.457483   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:11.569695   33471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0829 18:06:11.646095   33471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:06:11.646126   33471 retry.go:31] will retry after 425.215295ms: ssh: handshake failed: EOF
	I0829 18:06:11.667860   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:11.667890   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:11.765345   33471 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:11.765373   33471 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:11.848126   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:11.848497   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:11.848514   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:11.859073   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:11.859100   33471 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:11.863017   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:11.864173   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:11.948210   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:11.948298   33471 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:11.948267   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:11.948345   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:11.948424   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:11.951036   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:11.951054   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:11.955551   33471 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:11.955617   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:11.965568   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:11.967321   33471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:11.967346   33471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:12.047508   33471 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:12.047545   33471 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:12.060080   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:12.060105   33471 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:12.145272   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:12.145358   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:12.153120   33471 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:12.153146   33471 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:12.167673   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:12.256341   33471 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:12.256372   33471 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:12.346507   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:12.346537   33471 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:12.351483   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:12.355630   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:12.358674   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:12.358700   33471 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:12.464885   33471 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.007354776s)
	I0829 18:06:12.464974   33471 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:12.465902   33471 node_ready.go:35] waiting up to 6m0s for node "addons-970414" to be "Ready" ...
	I0829 18:06:12.554150   33471 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:12.554184   33471 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:12.564392   33471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:12.564475   33471 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:12.647807   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:12.647836   33471 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:12.651834   33471 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:12.651871   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:12.659639   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:12.659667   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:12.850643   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:12.850731   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:12.954879   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:13.046953   33471 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:13.046981   33471 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:13.050318   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:13.061740   33471 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-970414" context rescaled to 1 replicas
	I0829 18:06:13.161545   33471 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:13.161570   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:13.352888   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:13.359173   33471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:13.359202   33471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:13.368369   33471 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:13.368396   33471 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:13.446352   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:13.658489   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:13.658522   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:13.863922   33471 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:13.863951   33471 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:14.153008   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:14.153084   33471 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:06:14.265801   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:14.265888   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:06:14.346440   33471 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:14.346546   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:14.457711   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:14.467018   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:14.467092   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:06:14.664740   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:15.054818   33471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:15.054890   33471 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:06:15.449637   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:15.751232   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.903064634s)
	I0829 18:06:15.751343   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.888297403s)
	I0829 18:06:16.167149   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.302940806s)
	I0829 18:06:16.167480   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.219108729s)
	I0829 18:06:16.167583   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.201985375s)
	I0829 18:06:16.167666   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.999962951s)
	I0829 18:06:16.167708   33471 addons.go:475] Verifying addon registry=true in "addons-970414"
	I0829 18:06:16.167991   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.81647825s)
	I0829 18:06:16.168188   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.812528468s)
	I0829 18:06:16.169994   33471 out.go:177] * Verifying registry addon...
	I0829 18:06:16.172294   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:06:16.355174   33471 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:06:16.355543   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0829 18:06:16.453902   33471 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0829 18:06:16.760111   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:17.052900   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:17.348900   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:17.746877   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.247659   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.568953   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:06:18.569108   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:18.586232   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:18.748308   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:18.768683   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.813695623s)
	W0829 18:06:18.768747   33471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:18.768797   33471 retry.go:31] will retry after 129.631111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:06:18.768934   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.718574104s)
	I0829 18:06:18.768956   33471 addons.go:475] Verifying addon metrics-server=true in "addons-970414"
	I0829 18:06:18.769122   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.416207866s)
	I0829 18:06:18.769138   33471 addons.go:475] Verifying addon ingress=true in "addons-970414"
	I0829 18:06:18.769584   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.323191353s)
	I0829 18:06:18.769666   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.311841863s)
	I0829 18:06:18.772109   33471 out.go:177] * Verifying ingress addon...
	I0829 18:06:18.772111   33471 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-970414 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:06:18.774901   33471 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:06:18.784226   33471 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:06:18.784247   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:18.864874   33471 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:06:18.881665   33471 addons.go:234] Setting addon gcp-auth=true in "addons-970414"
	I0829 18:06:18.881720   33471 host.go:66] Checking if "addons-970414" exists ...
	I0829 18:06:18.882075   33471 cli_runner.go:164] Run: docker container inspect addons-970414 --format={{.State.Status}}
	I0829 18:06:18.899292   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:18.901129   33471 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:06:18.901171   33471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-970414
	I0829 18:06:18.920489   33471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/addons-970414/id_rsa Username:docker}
	I0829 18:06:19.177567   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:19.286115   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:19.554486   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:19.749842   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:19.848969   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.399253398s)
	I0829 18:06:19.849250   33471 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-970414"
	I0829 18:06:19.851236   33471 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:06:19.854515   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:06:19.869263   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:19.870161   33471 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:06:19.870184   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:20.176202   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:20.279572   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:20.357721   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:20.675473   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:20.778813   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:20.857772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.176058   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:21.279045   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:21.357952   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.676019   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:21.778347   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:21.854733   33471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.955399804s)
	I0829 18:06:21.854798   33471 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.953652114s)
	I0829 18:06:21.857145   33471 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:06:21.857500   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:21.859828   33471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:21.861259   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:06:21.861280   33471 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:06:21.879467   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:06:21.879489   33471 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:06:21.895886   33471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:21.895909   33471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:06:21.954214   33471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:06:21.969114   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:22.176618   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:22.279610   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:22.358000   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:22.553735   33471 addons.go:475] Verifying addon gcp-auth=true in "addons-970414"
	I0829 18:06:22.555569   33471 out.go:177] * Verifying gcp-auth addon...
	I0829 18:06:22.558244   33471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:06:22.560579   33471 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:06:22.560596   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:22.674902   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:22.778700   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:22.858118   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:23.061602   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:23.175002   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:23.278641   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:23.358524   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:23.561370   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:23.675585   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:23.778306   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:23.857475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:24.061119   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:24.175442   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:24.278372   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:24.357538   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:24.468405   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:24.562284   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:24.676070   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:24.778626   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:24.857813   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:25.061006   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:25.175734   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:25.278745   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:25.357486   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:25.562423   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:25.675499   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:25.778380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:25.857418   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:26.061541   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:26.174800   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:26.278575   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:26.357690   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:26.468882   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:26.561126   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:26.675797   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:26.778490   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:26.857597   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:27.061577   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:27.174998   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:27.278808   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:27.357899   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:27.561262   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:27.675554   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:27.778294   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:27.857440   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.061639   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:28.175012   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:28.278856   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:28.358354   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.560629   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:28.674835   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:28.778609   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:28.857628   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:28.968635   33471 node_ready.go:53] node "addons-970414" has status "Ready":"False"
	I0829 18:06:29.060906   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:29.175160   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:29.279076   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:29.358063   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:29.561636   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:29.674927   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:29.779426   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:29.863822   33471 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:06:29.863848   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:29.969260   33471 node_ready.go:49] node "addons-970414" has status "Ready":"True"
	I0829 18:06:29.969289   33471 node_ready.go:38] duration metric: took 17.50332165s for node "addons-970414" to be "Ready" ...
	I0829 18:06:29.969301   33471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:29.977908   33471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:30.061963   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:30.176070   33471 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:06:30.176093   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:30.279944   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:30.381182   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:30.561917   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:30.675717   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:30.779380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:30.858908   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.061158   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:31.175903   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:31.278733   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:31.360013   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.483381   33471 pod_ready.go:93] pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.483402   33471 pod_ready.go:82] duration metric: took 1.505470075s for pod "coredns-6f6b679f8f-jxrb9" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.483421   33471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.487161   33471 pod_ready.go:93] pod "etcd-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.487178   33471 pod_ready.go:82] duration metric: took 3.750939ms for pod "etcd-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.487191   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.490614   33471 pod_ready.go:93] pod "kube-apiserver-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.490632   33471 pod_ready.go:82] duration metric: took 3.434179ms for pod "kube-apiserver-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.490640   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.493931   33471 pod_ready.go:93] pod "kube-controller-manager-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.493950   33471 pod_ready.go:82] duration metric: took 3.301077ms for pod "kube-controller-manager-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.493962   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mwgq4" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.561772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:31.569942   33471 pod_ready.go:93] pod "kube-proxy-mwgq4" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.569964   33471 pod_ready.go:82] duration metric: took 75.994271ms for pod "kube-proxy-mwgq4" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.569973   33471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.676604   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:31.779535   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:31.859414   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:31.970319   33471 pod_ready.go:93] pod "kube-scheduler-addons-970414" in "kube-system" namespace has status "Ready":"True"
	I0829 18:06:31.970345   33471 pod_ready.go:82] duration metric: took 400.364012ms for pod "kube-scheduler-addons-970414" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:31.970358   33471 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:32.062142   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:32.175938   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:32.279359   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:32.358320   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:32.562203   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:32.675175   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:32.779380   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:32.858562   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.061806   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:33.175414   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:33.278190   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:33.359497   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.566753   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:33.679816   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:33.780038   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:33.859085   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:33.976545   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:34.061607   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:34.175533   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:34.278647   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:34.358847   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:34.562865   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:34.676116   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:34.778980   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:34.859383   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.061690   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:35.175979   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:35.278700   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:35.358987   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.561990   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:35.676053   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:35.778889   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:35.859309   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:35.978326   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:36.061789   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:36.175701   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:36.278911   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:36.358733   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:36.561288   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:36.675973   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:36.778702   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:36.859052   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:37.062147   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:37.175953   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:37.278732   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:37.358897   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:37.562562   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:37.677246   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:37.779993   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:37.858836   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:38.061840   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:38.175582   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:38.279853   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:38.358730   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:38.475807   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:38.562000   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:38.675376   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:38.779020   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:38.858866   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:39.061799   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:39.175516   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:39.278386   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:39.358349   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:39.561877   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:39.675407   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:39.778631   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:39.858049   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:40.061166   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:40.175901   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:40.279026   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:40.361677   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:40.476589   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:40.562707   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:40.677196   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:40.778687   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:40.858582   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:41.062646   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:41.179136   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.278942   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:41.359243   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:41.561503   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:41.676508   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:41.779738   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:41.859475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.062106   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:42.176135   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.279258   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:42.358777   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.562048   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:42.675925   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:42.779048   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:42.879772   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:42.975713   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:43.061010   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:43.175551   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.279093   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:43.358897   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:43.562475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:43.675599   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:43.778529   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:43.858277   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.062101   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:44.176457   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.279344   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:44.357937   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.562224   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:44.676679   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:44.779034   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:44.858759   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:44.976061   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:45.061405   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:45.176561   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.278694   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:45.358550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:45.562365   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:45.675919   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:45.778988   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:45.858884   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.061118   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:46.175480   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.278388   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:46.358500   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.561876   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:46.676217   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:46.779623   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:46.858934   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:46.976665   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:47.062438   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:47.176856   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.279274   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:47.360207   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:47.562049   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:47.676310   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:47.847611   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:47.860403   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.061438   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:48.176542   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.279914   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:48.358708   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:48.561468   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:48.676103   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:48.779307   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:48.858934   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.062411   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:49.175774   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.279108   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.358770   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:49.475745   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:49.561498   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:49.676506   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:49.779122   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:49.859246   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.061522   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:50.184207   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.285183   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:50.359392   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:50.563222   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:50.676338   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:50.779289   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:50.859315   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.063561   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:51.175786   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.278876   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:51.359522   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:51.477135   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:51.561730   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:51.675433   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:51.779706   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:51.858484   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.061448   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:52.176160   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.279349   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:52.380355   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:52.561333   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:52.675905   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:52.778605   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:52.858471   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.061429   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:53.176294   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.279494   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:53.358900   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.561935   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:53.675675   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:53.780447   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:53.858317   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:53.975085   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:54.061527   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:54.176015   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.278916   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:54.358728   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:54.561195   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:54.676074   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:54.778888   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:54.858526   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.061961   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:55.175994   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.278912   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:55.358696   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.562439   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:55.676087   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:55.779100   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:55.858417   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:55.975459   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:56.060830   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:56.175297   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.279178   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:56.358860   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:56.561356   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:56.676270   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:56.779497   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:56.859993   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.062783   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:57.254123   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.348605   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.359917   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:57.561267   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:57.748519   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:57.849949   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:57.859389   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.049715   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:06:58.061798   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:58.176534   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.348936   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:58.359169   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:58.561969   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:58.676240   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:58.779911   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:58.858659   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.062278   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:59.176444   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.279797   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:59.359146   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:06:59.561362   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:06:59.676652   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:06:59.778887   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:06:59.859071   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.061841   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:00.176029   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.278919   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:00.359145   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:00.476358   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:00.562430   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:00.676749   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:00.778262   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:00.859251   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.061470   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:01.176363   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.279417   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.361332   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:01.562496   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:01.676178   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:01.779058   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:01.859261   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.061640   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:02.175950   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.279315   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:02.359088   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:02.476615   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:02.561997   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:02.675860   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.778891   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:02.859381   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.061658   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:03.175437   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.279450   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:03.380178   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:03.561274   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:03.676141   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.778914   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:03.858550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.061119   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:04.175986   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.279413   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.358524   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:04.476911   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:04.561419   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:04.676126   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.779641   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.859408   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.061403   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:05.176552   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.278788   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.358106   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.561720   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:05.677343   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.779750   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.858550   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.061549   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:06.176475   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.279830   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.358299   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.561385   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:06.676305   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.779396   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.858256   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.976151   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:07.062281   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.176114   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.279243   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.359098   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.561770   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.675691   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.778345   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.858383   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.062024   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.175973   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.278626   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.359845   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.562272   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.676299   33471 kapi.go:107] duration metric: took 52.503998136s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:08.779614   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.858729   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.061667   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.278948   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.358825   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.475603   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:09.561133   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.803043   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.869349   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.061639   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.279248   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.358623   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.561862   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.779245   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.858210   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.062082   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.279187   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.380296   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.476169   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:11.562124   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.780090   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.859518   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.061848   33471 kapi.go:107] duration metric: took 49.50360321s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:07:12.064235   33471 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-970414 cluster.
	I0829 18:07:12.065845   33471 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:07:12.067312   33471 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:07:12.279829   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.380390   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.781279   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.858496   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.279475   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.357989   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.476371   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:13.778868   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.859012   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.278985   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.358948   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.778506   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.858490   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.279669   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.358327   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.778714   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.859145   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.975951   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:16.279416   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.358353   33471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.778955   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.879919   33471 kapi.go:107] duration metric: took 57.025400666s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:07:17.278506   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.779629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.279662   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.475735   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:18.778865   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.279629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.778833   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.279629   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.476070   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:20.779310   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.278746   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.778588   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.279091   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.778744   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.975809   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:23.279672   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.778698   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.279136   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.779600   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.975845   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:25.279527   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.778694   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.279166   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.778678   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.976229   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:27.279572   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.779925   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.278543   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.778902   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.279513   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.475862   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:29.778825   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.278410   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.779205   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.278785   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.778310   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.975687   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:32.279208   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.778950   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.278632   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.778869   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.975755   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:34.279008   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.849062   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.279182   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.849707   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.047727   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:36.348740   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.779662   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.279104   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.779192   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.279217   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.476596   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:38.778967   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.279557   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.778520   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.279154   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.781434   33471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.976165   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:41.300505   33471 kapi.go:107] duration metric: took 1m22.525606095s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:07:41.302197   33471 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, helm-tiller, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0829 18:07:41.303840   33471 addons.go:510] duration metric: took 1m30.077118852s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin helm-tiller storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0829 18:07:43.475312   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:45.475559   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:47.975734   33471 pod_ready.go:103] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:50.475293   33471 pod_ready.go:93] pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:50.475315   33471 pod_ready.go:82] duration metric: took 1m18.504950495s for pod "metrics-server-8988944d9-jss9n" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.475325   33471 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.479409   33471 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:50.479430   33471 pod_ready.go:82] duration metric: took 4.09992ms for pod "nvidia-device-plugin-daemonset-njmrn" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:50.479449   33471 pod_ready.go:39] duration metric: took 1m20.510134495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:50.479465   33471 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:07:50.479496   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:50.479553   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:50.512656   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:50.512676   33471 cri.go:89] found id: ""
	I0829 18:07:50.512684   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:50.512723   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.515973   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:50.516034   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:50.548643   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:50.548662   33471 cri.go:89] found id: ""
	I0829 18:07:50.548669   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:50.548718   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.551901   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:50.551963   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:50.583669   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:50.583702   33471 cri.go:89] found id: ""
	I0829 18:07:50.583709   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:50.583748   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.586859   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:50.586933   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:50.618860   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:50.618883   33471 cri.go:89] found id: ""
	I0829 18:07:50.618890   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:50.618930   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.622032   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:50.622084   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:50.653704   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:50.653729   33471 cri.go:89] found id: ""
	I0829 18:07:50.653740   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:50.653792   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.657019   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:50.657077   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:50.690012   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:50.690036   33471 cri.go:89] found id: ""
	I0829 18:07:50.690045   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:50.690086   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.693191   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:50.693236   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:50.726118   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:50.726139   33471 cri.go:89] found id: ""
	I0829 18:07:50.726149   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:50.726190   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:50.729505   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:50.729526   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:50.767861   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:50.767892   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:50.779540   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:50.779567   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:50.822562   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:50.822592   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:50.872590   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:50.872628   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:50.904925   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:50.904951   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:50.960999   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:50.961033   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:50.993169   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:50.993195   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:51.072501   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:51.072533   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:51.113527   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:51.113556   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:51.183067   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:51.183100   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:51.281419   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:51.281446   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:53.816429   33471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:07:53.829736   33471 api_server.go:72] duration metric: took 1m42.603041834s to wait for apiserver process to appear ...
	I0829 18:07:53.829767   33471 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:07:53.829801   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:53.829844   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:53.862325   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:53.862351   33471 cri.go:89] found id: ""
	I0829 18:07:53.862361   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:53.862409   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.865569   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:53.865646   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:53.898226   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:53.898247   33471 cri.go:89] found id: ""
	I0829 18:07:53.898255   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:53.898296   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.901566   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:53.901628   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:53.934199   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:53.934218   33471 cri.go:89] found id: ""
	I0829 18:07:53.934225   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:53.934265   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.937354   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:53.937402   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:53.970450   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:53.970472   33471 cri.go:89] found id: ""
	I0829 18:07:53.970479   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:53.970524   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:53.973830   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:53.973887   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:54.006146   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:54.006169   33471 cri.go:89] found id: ""
	I0829 18:07:54.006177   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:54.006224   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.009454   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:54.009512   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:54.041172   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:54.041191   33471 cri.go:89] found id: ""
	I0829 18:07:54.041198   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:54.041249   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.044312   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:54.044368   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:54.083976   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:54.084001   33471 cri.go:89] found id: ""
	I0829 18:07:54.084009   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:54.084049   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:54.087300   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:54.087324   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:54.098754   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:54.098782   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:54.161684   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:54.161716   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:54.241049   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:54.241085   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:54.273621   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:54.273646   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:54.331096   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:54.331132   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:54.363448   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:54.363477   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:54.431857   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:54.431896   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:54.528063   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:54.528089   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:54.577648   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:54.577681   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:54.611916   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:54.611946   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:54.647955   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:54.647983   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:57.189075   33471 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0829 18:07:57.192542   33471 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0829 18:07:57.193379   33471 api_server.go:141] control plane version: v1.31.0
	I0829 18:07:57.193402   33471 api_server.go:131] duration metric: took 3.363628924s to wait for apiserver health ...
	I0829 18:07:57.193411   33471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:07:57.193432   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 18:07:57.193471   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 18:07:57.225819   33471 cri.go:89] found id: "b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:57.225841   33471 cri.go:89] found id: ""
	I0829 18:07:57.225850   33471 logs.go:276] 1 containers: [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54]
	I0829 18:07:57.225896   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.228901   33471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 18:07:57.228944   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 18:07:57.260637   33471 cri.go:89] found id: "5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:57.260656   33471 cri.go:89] found id: ""
	I0829 18:07:57.260663   33471 logs.go:276] 1 containers: [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca]
	I0829 18:07:57.260704   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.263753   33471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 18:07:57.263801   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 18:07:57.294974   33471 cri.go:89] found id: "3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:57.294997   33471 cri.go:89] found id: ""
	I0829 18:07:57.295006   33471 logs.go:276] 1 containers: [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250]
	I0829 18:07:57.295058   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.298097   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 18:07:57.298155   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 18:07:57.329667   33471 cri.go:89] found id: "cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:57.329690   33471 cri.go:89] found id: ""
	I0829 18:07:57.329698   33471 logs.go:276] 1 containers: [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40]
	I0829 18:07:57.329749   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.332928   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 18:07:57.332984   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 18:07:57.364944   33471 cri.go:89] found id: "f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:57.364962   33471 cri.go:89] found id: ""
	I0829 18:07:57.364970   33471 logs.go:276] 1 containers: [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd]
	I0829 18:07:57.365005   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.368114   33471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 18:07:57.368166   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 18:07:57.401257   33471 cri.go:89] found id: "70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:57.401276   33471 cri.go:89] found id: ""
	I0829 18:07:57.401283   33471 logs.go:276] 1 containers: [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7]
	I0829 18:07:57.401332   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.404460   33471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 18:07:57.404506   33471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 18:07:57.435578   33471 cri.go:89] found id: "fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:07:57.435600   33471 cri.go:89] found id: ""
	I0829 18:07:57.435607   33471 logs.go:276] 1 containers: [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f]
	I0829 18:07:57.435647   33471 ssh_runner.go:195] Run: which crictl
	I0829 18:07:57.438689   33471 logs.go:123] Gathering logs for kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] ...
	I0829 18:07:57.438711   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7"
	I0829 18:07:57.493400   33471 logs.go:123] Gathering logs for CRI-O ...
	I0829 18:07:57.493428   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 18:07:57.565541   33471 logs.go:123] Gathering logs for kubelet ...
	I0829 18:07:57.565577   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 18:07:57.635720   33471 logs.go:123] Gathering logs for dmesg ...
	I0829 18:07:57.635750   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 18:07:57.647194   33471 logs.go:123] Gathering logs for kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] ...
	I0829 18:07:57.647217   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54"
	I0829 18:07:57.689192   33471 logs.go:123] Gathering logs for etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] ...
	I0829 18:07:57.689228   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca"
	I0829 18:07:57.738329   33471 logs.go:123] Gathering logs for coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] ...
	I0829 18:07:57.738357   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250"
	I0829 18:07:57.771675   33471 logs.go:123] Gathering logs for kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] ...
	I0829 18:07:57.771698   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd"
	I0829 18:07:57.802656   33471 logs.go:123] Gathering logs for container status ...
	I0829 18:07:57.802684   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 18:07:57.842425   33471 logs.go:123] Gathering logs for describe nodes ...
	I0829 18:07:57.842451   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 18:07:57.937146   33471 logs.go:123] Gathering logs for kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] ...
	I0829 18:07:57.937174   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40"
	I0829 18:07:57.974724   33471 logs.go:123] Gathering logs for kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] ...
	I0829 18:07:57.974752   33471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f"
	I0829 18:08:00.516381   33471 system_pods.go:59] 19 kube-system pods found
	I0829 18:08:00.516420   33471 system_pods.go:61] "coredns-6f6b679f8f-jxrb9" [99ffdce3-4a2f-4216-95ca-28db164333a2] Running
	I0829 18:08:00.516426   33471 system_pods.go:61] "csi-hostpath-attacher-0" [b33c21ec-bc06-47b0-b7b4-78c5392d31f7] Running
	I0829 18:08:00.516431   33471 system_pods.go:61] "csi-hostpath-resizer-0" [ae955038-1da8-4d77-a461-9dccfe623922] Running
	I0829 18:08:00.516437   33471 system_pods.go:61] "csi-hostpathplugin-5wlj7" [c7f02d44-110a-4971-b90a-521977151630] Running
	I0829 18:08:00.516442   33471 system_pods.go:61] "etcd-addons-970414" [8daf5c22-02d4-44e0-8a5c-0d5b9c0cd7b5] Running
	I0829 18:08:00.516447   33471 system_pods.go:61] "kindnet-95zg6" [612be856-b5ad-4571-9908-168f86f5b273] Running
	I0829 18:08:00.516452   33471 system_pods.go:61] "kube-apiserver-addons-970414" [549d4f3b-086e-40f7-9b7a-513220af52cd] Running
	I0829 18:08:00.516457   33471 system_pods.go:61] "kube-controller-manager-addons-970414" [00d3410f-773e-471f-9716-7fc678c6f5a3] Running
	I0829 18:08:00.516466   33471 system_pods.go:61] "kube-ingress-dns-minikube" [6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1] Running
	I0829 18:08:00.516471   33471 system_pods.go:61] "kube-proxy-mwgq4" [39ef4c84-6d42-40f2-9eb2-af13d2c9a233] Running
	I0829 18:08:00.516479   33471 system_pods.go:61] "kube-scheduler-addons-970414" [75453275-6d16-4fc0-944d-d30987bfccb2] Running
	I0829 18:08:00.516485   33471 system_pods.go:61] "metrics-server-8988944d9-jss9n" [a866f6c5-ff40-4062-986b-ddae9310879c] Running
	I0829 18:08:00.516490   33471 system_pods.go:61] "nvidia-device-plugin-daemonset-njmrn" [5c975a82-28c1-431d-b4e4-b89312486f53] Running
	I0829 18:08:00.516497   33471 system_pods.go:61] "registry-6fb4cdfc84-srp9d" [a6e6445c-947b-4527-a5b7-e1710ec0b292] Running
	I0829 18:08:00.516500   33471 system_pods.go:61] "registry-proxy-56c89" [c9c1a8d7-92a0-458c-a4fa-4271bfd8f736] Running
	I0829 18:08:00.516506   33471 system_pods.go:61] "snapshot-controller-56fcc65765-c9pzh" [b3e9483b-e20c-4b8d-b5b4-53940d1f7621] Running
	I0829 18:08:00.516509   33471 system_pods.go:61] "snapshot-controller-56fcc65765-w7vbq" [0a038557-f899-4971-87c0-4a476ae40ff9] Running
	I0829 18:08:00.516513   33471 system_pods.go:61] "storage-provisioner" [7cffe50e-abe7-4d9c-9c04-88e86ad1ffb9] Running
	I0829 18:08:00.516516   33471 system_pods.go:61] "tiller-deploy-b48cc5f79-h8shr" [53f4571a-d63e-4721-aa85-b44922772189] Running
	I0829 18:08:00.516522   33471 system_pods.go:74] duration metric: took 3.32310726s to wait for pod list to return data ...
	I0829 18:08:00.516531   33471 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:08:00.518762   33471 default_sa.go:45] found service account: "default"
	I0829 18:08:00.518781   33471 default_sa.go:55] duration metric: took 2.241797ms for default service account to be created ...
	I0829 18:08:00.518789   33471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:08:00.527444   33471 system_pods.go:86] 19 kube-system pods found
	I0829 18:08:00.527470   33471 system_pods.go:89] "coredns-6f6b679f8f-jxrb9" [99ffdce3-4a2f-4216-95ca-28db164333a2] Running
	I0829 18:08:00.527475   33471 system_pods.go:89] "csi-hostpath-attacher-0" [b33c21ec-bc06-47b0-b7b4-78c5392d31f7] Running
	I0829 18:08:00.527479   33471 system_pods.go:89] "csi-hostpath-resizer-0" [ae955038-1da8-4d77-a461-9dccfe623922] Running
	I0829 18:08:00.527483   33471 system_pods.go:89] "csi-hostpathplugin-5wlj7" [c7f02d44-110a-4971-b90a-521977151630] Running
	I0829 18:08:00.527486   33471 system_pods.go:89] "etcd-addons-970414" [8daf5c22-02d4-44e0-8a5c-0d5b9c0cd7b5] Running
	I0829 18:08:00.527490   33471 system_pods.go:89] "kindnet-95zg6" [612be856-b5ad-4571-9908-168f86f5b273] Running
	I0829 18:08:00.527493   33471 system_pods.go:89] "kube-apiserver-addons-970414" [549d4f3b-086e-40f7-9b7a-513220af52cd] Running
	I0829 18:08:00.527496   33471 system_pods.go:89] "kube-controller-manager-addons-970414" [00d3410f-773e-471f-9716-7fc678c6f5a3] Running
	I0829 18:08:00.527500   33471 system_pods.go:89] "kube-ingress-dns-minikube" [6f4f1e88-63c1-4ce5-9e13-49ba51e0d9e1] Running
	I0829 18:08:00.527503   33471 system_pods.go:89] "kube-proxy-mwgq4" [39ef4c84-6d42-40f2-9eb2-af13d2c9a233] Running
	I0829 18:08:00.527507   33471 system_pods.go:89] "kube-scheduler-addons-970414" [75453275-6d16-4fc0-944d-d30987bfccb2] Running
	I0829 18:08:00.527510   33471 system_pods.go:89] "metrics-server-8988944d9-jss9n" [a866f6c5-ff40-4062-986b-ddae9310879c] Running
	I0829 18:08:00.527514   33471 system_pods.go:89] "nvidia-device-plugin-daemonset-njmrn" [5c975a82-28c1-431d-b4e4-b89312486f53] Running
	I0829 18:08:00.527520   33471 system_pods.go:89] "registry-6fb4cdfc84-srp9d" [a6e6445c-947b-4527-a5b7-e1710ec0b292] Running
	I0829 18:08:00.527523   33471 system_pods.go:89] "registry-proxy-56c89" [c9c1a8d7-92a0-458c-a4fa-4271bfd8f736] Running
	I0829 18:08:00.527526   33471 system_pods.go:89] "snapshot-controller-56fcc65765-c9pzh" [b3e9483b-e20c-4b8d-b5b4-53940d1f7621] Running
	I0829 18:08:00.527532   33471 system_pods.go:89] "snapshot-controller-56fcc65765-w7vbq" [0a038557-f899-4971-87c0-4a476ae40ff9] Running
	I0829 18:08:00.527535   33471 system_pods.go:89] "storage-provisioner" [7cffe50e-abe7-4d9c-9c04-88e86ad1ffb9] Running
	I0829 18:08:00.527538   33471 system_pods.go:89] "tiller-deploy-b48cc5f79-h8shr" [53f4571a-d63e-4721-aa85-b44922772189] Running
	I0829 18:08:00.527546   33471 system_pods.go:126] duration metric: took 8.752911ms to wait for k8s-apps to be running ...
	I0829 18:08:00.527554   33471 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:08:00.527594   33471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:08:00.539104   33471 system_svc.go:56] duration metric: took 11.540627ms WaitForService to wait for kubelet
	I0829 18:08:00.539136   33471 kubeadm.go:582] duration metric: took 1m49.312445201s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:08:00.539157   33471 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:08:00.542184   33471 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0829 18:08:00.542215   33471 node_conditions.go:123] node cpu capacity is 8
	I0829 18:08:00.542232   33471 node_conditions.go:105] duration metric: took 3.069703ms to run NodePressure ...
	I0829 18:08:00.542247   33471 start.go:241] waiting for startup goroutines ...
	I0829 18:08:00.542258   33471 start.go:246] waiting for cluster config update ...
	I0829 18:08:00.542277   33471 start.go:255] writing updated cluster config ...
	I0829 18:08:00.542602   33471 ssh_runner.go:195] Run: rm -f paused
	I0829 18:08:00.589612   33471 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:08:00.591791   33471 out.go:177] * Done! kubectl is now configured to use "addons-970414" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 29 18:19:06 addons-970414 crio[1030]: time="2024-08-29 18:19:06.488782481Z" level=info msg="Removing pod sandbox: 763e4aa04b031b383b27bd22b0f51ae751f54335ca2502df93e06eee3d68ce4c" id=35b9e064-4cbc-4d00-b7f9-3348e1730182 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 29 18:19:06 addons-970414 crio[1030]: time="2024-08-29 18:19:06.495243191Z" level=info msg="Removed pod sandbox: 763e4aa04b031b383b27bd22b0f51ae751f54335ca2502df93e06eee3d68ce4c" id=35b9e064-4cbc-4d00-b7f9-3348e1730182 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 29 18:19:08 addons-970414 crio[1030]: time="2024-08-29 18:19:08.155798512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6dbb3c87-89d9-4957-a46e-7d415ed4bb0d name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:08 addons-970414 crio[1030]: time="2024-08-29 18:19:08.156188490Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6dbb3c87-89d9-4957-a46e-7d415ed4bb0d name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:21 addons-970414 crio[1030]: time="2024-08-29 18:19:21.155254057Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8a90779-b498-4807-85ab-4355b8f7c6c4 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:21 addons-970414 crio[1030]: time="2024-08-29 18:19:21.155468107Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b8a90779-b498-4807-85ab-4355b8f7c6c4 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:33 addons-970414 crio[1030]: time="2024-08-29 18:19:33.155344416Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f761c39d-2412-4f77-84b4-ef472200686f name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:33 addons-970414 crio[1030]: time="2024-08-29 18:19:33.155549015Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f761c39d-2412-4f77-84b4-ef472200686f name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:45 addons-970414 crio[1030]: time="2024-08-29 18:19:45.155293134Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=67ab104e-ecc4-4585-be3a-22af3b3a4e03 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:45 addons-970414 crio[1030]: time="2024-08-29 18:19:45.155485465Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=67ab104e-ecc4-4585-be3a-22af3b3a4e03 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:57 addons-970414 crio[1030]: time="2024-08-29 18:19:57.155446320Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8d99336a-6554-464a-9c8f-2b81e7bb068c name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:19:57 addons-970414 crio[1030]: time="2024-08-29 18:19:57.155669311Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8d99336a-6554-464a-9c8f-2b81e7bb068c name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:20:12 addons-970414 crio[1030]: time="2024-08-29 18:20:12.155331640Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=41d8148c-fa57-4695-8b76-cf3eedc68ca1 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:20:12 addons-970414 crio[1030]: time="2024-08-29 18:20:12.155591916Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=41d8148c-fa57-4695-8b76-cf3eedc68ca1 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:20:27 addons-970414 crio[1030]: time="2024-08-29 18:20:27.155013048Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b9fb1e33-1ab4-417b-9889-2a15a8724a75 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:20:27 addons-970414 crio[1030]: time="2024-08-29 18:20:27.155254137Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b9fb1e33-1ab4-417b-9889-2a15a8724a75 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:20:38 addons-970414 crio[1030]: time="2024-08-29 18:20:38.155441908Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c5c325e-6d34-4514-80be-dcf86e6a5d2d name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:20:38 addons-970414 crio[1030]: time="2024-08-29 18:20:38.155719434Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9c5c325e-6d34-4514-80be-dcf86e6a5d2d name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:20:52 addons-970414 crio[1030]: time="2024-08-29 18:20:52.154999173Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=071e2ab1-ef09-4343-b5d8-c2e9984a75cc name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:20:52 addons-970414 crio[1030]: time="2024-08-29 18:20:52.155286519Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=071e2ab1-ef09-4343-b5d8-c2e9984a75cc name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:21:07 addons-970414 crio[1030]: time="2024-08-29 18:21:07.155835621Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=48383b85-be31-4149-8586-b1d0ac147587 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:21:07 addons-970414 crio[1030]: time="2024-08-29 18:21:07.156058499Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=48383b85-be31-4149-8586-b1d0ac147587 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:21:20 addons-970414 crio[1030]: time="2024-08-29 18:21:20.155559779Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c31e7b6-7530-4e82-aae7-1e5f122fac60 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:21:20 addons-970414 crio[1030]: time="2024-08-29 18:21:20.155810268Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5c31e7b6-7530-4e82-aae7-1e5f122fac60 name=/runtime.v1.ImageService/ImageStatus
	Aug 29 18:21:27 addons-970414 crio[1030]: time="2024-08-29 18:21:27.534232278Z" level=info msg="Stopping container: 6888613b3e8ca92b54a9bd85c691207a041721ed01b03753f06021863d01d356 (timeout: 30s)" id=5cdcc432-fbeb-462c-89ff-0025527b03da name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	28b4757a4c973       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   68bfbf6f334af       hello-world-app-55bf9c44b4-28jdv
	03f63bd4b1c48       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         5 minutes ago       Running             nginx                     0                   4fa70648299cc       nginx
	751a953e0230f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   a12fb4e4da859       gcp-auth-89d5ffd79-cj6cz
	6888613b3e8ca       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   14 minutes ago      Running             metrics-server            0                   5c6d6ccdb7bd8       metrics-server-8988944d9-jss9n
	3a16651d14fd4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        14 minutes ago      Running             coredns                   0                   c991950d1479a       coredns-6f6b679f8f-jxrb9
	fc284d6f42abd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago      Running             storage-provisioner       0                   1c77efb0d73c6       storage-provisioner
	fc407b261b55a       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                      15 minutes ago      Running             kindnet-cni               0                   3a14aa7cbd5ba       kindnet-95zg6
	f3c75142fecd2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        15 minutes ago      Running             kube-proxy                0                   6259dfbf37c5a       kube-proxy-mwgq4
	cb91925e81486       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        15 minutes ago      Running             kube-scheduler            0                   3af0a40f28992       kube-scheduler-addons-970414
	5034cc120442d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   989f4e8da94ea       etcd-addons-970414
	70642d5cd8ef0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        15 minutes ago      Running             kube-controller-manager   0                   740a72692bfef       kube-controller-manager-addons-970414
	b65cd62e3477a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        15 minutes ago      Running             kube-apiserver            0                   1be263bee45c2       kube-apiserver-addons-970414
	
	
	==> coredns [3a16651d14fd48e904dc4e85c8d08d8d877ca6cc3b9650a29525bb09a6185250] <==
	[INFO] 10.244.0.19:41065 - 41812 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110073s
	[INFO] 10.244.0.19:33314 - 7978 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068562s
	[INFO] 10.244.0.19:33314 - 63253 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123722s
	[INFO] 10.244.0.19:33313 - 15372 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005052571s
	[INFO] 10.244.0.19:33313 - 8969 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005240532s
	[INFO] 10.244.0.19:56468 - 13948 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004476729s
	[INFO] 10.244.0.19:56468 - 34426 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005241684s
	[INFO] 10.244.0.19:36060 - 35696 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004520671s
	[INFO] 10.244.0.19:36060 - 15990 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004576926s
	[INFO] 10.244.0.19:44003 - 15478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079273s
	[INFO] 10.244.0.19:44003 - 29556 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115076s
	[INFO] 10.244.0.20:49487 - 52545 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147201s
	[INFO] 10.244.0.20:59535 - 5474 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000116485s
	[INFO] 10.244.0.20:51018 - 29008 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119345s
	[INFO] 10.244.0.20:51904 - 9903 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179576s
	[INFO] 10.244.0.20:44385 - 47503 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138771s
	[INFO] 10.244.0.20:53196 - 482 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137631s
	[INFO] 10.244.0.20:52299 - 24778 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005524264s
	[INFO] 10.244.0.20:56050 - 55826 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006091549s
	[INFO] 10.244.0.20:52775 - 61641 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004707679s
	[INFO] 10.244.0.20:52194 - 42579 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00473342s
	[INFO] 10.244.0.20:58349 - 16179 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004578594s
	[INFO] 10.244.0.20:59907 - 15287 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006119565s
	[INFO] 10.244.0.20:54560 - 33495 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00068612s
	[INFO] 10.244.0.20:50005 - 1476 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000775831s
	
	
	==> describe nodes <==
	Name:               addons-970414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-970414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-970414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-970414
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-970414
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:21:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:18:41 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:18:41 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:18:41 +0000   Thu, 29 Aug 2024 18:06:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:18:41 +0000   Thu, 29 Aug 2024 18:06:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-970414
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 f871f2a5cd3540f79b6c200227bc35ed
	  System UUID:                49e09a6c-969e-4bfb-9562-e1e953ad9e00
	  Boot ID:                    fb799716-ba24-44f3-8d84-c852ba38aeb7
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-28jdv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  gcp-auth                    gcp-auth-89d5ffd79-cj6cz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-6f6b679f8f-jxrb9                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-addons-970414                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-95zg6                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-970414             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-970414    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-mwgq4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-970414             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-8988944d9-jss9n           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-970414 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-970414 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-970414 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node addons-970414 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node addons-970414 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                kubelet          Node addons-970414 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node addons-970414 event: Registered Node addons-970414 in Controller
	  Normal   NodeReady                14m                kubelet          Node addons-970414 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000853] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000677] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000668] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000729] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.580338] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.044213] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005611] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013638] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002516] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.013312] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.261359] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 18:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +1.032106] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +2.011848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +4.223585] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[  +8.191236] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[ +16.126426] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	[Aug29 18:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 9c 61 2a 16 2d aa 42 64 c6 6a 13 08 00
	
	
	==> etcd [5034cc120442dbbb0fa7a0356490896e276dbed610484c36b8da79981a31d1ca] <==
	{"level":"info","ts":"2024-08-29T18:06:14.865581Z","caller":"traceutil/trace.go:171","msg":"trace[783420174] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:456; }","duration":"104.355912ms","start":"2024-08-29T18:06:14.761212Z","end":"2024-08-29T18:06:14.865567Z","steps":["trace[783420174] 'agreement among raft nodes before linearized reading'  (duration: 104.199882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:14.866155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.413882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:14.866237Z","caller":"traceutil/trace.go:171","msg":"trace[751535580] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:456; }","duration":"101.515166ms","start":"2024-08-29T18:06:14.764713Z","end":"2024-08-29T18:06:14.866229Z","steps":["trace[751535580] 'agreement among raft nodes before linearized reading'  (duration: 101.396746ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.945390Z","caller":"traceutil/trace.go:171","msg":"trace[1123768633] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:521; }","duration":"176.463619ms","start":"2024-08-29T18:06:15.768910Z","end":"2024-08-29T18:06:15.945374Z","steps":["trace[1123768633] 'read index received'  (duration: 77.240649ms)","trace[1123768633] 'applied index is now lower than readState.Index'  (duration: 99.222386ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:06:15.945619Z","caller":"traceutil/trace.go:171","msg":"trace[1692666756] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"191.612828ms","start":"2024-08-29T18:06:15.753992Z","end":"2024-08-29T18:06:15.945605Z","steps":["trace[1692666756] 'process raft request'  (duration: 92.148406ms)","trace[1692666756] 'compare'  (duration: 98.998238ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:06:15.945833Z","caller":"traceutil/trace.go:171","msg":"trace[866514615] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"181.389131ms","start":"2024-08-29T18:06:15.764436Z","end":"2024-08-29T18:06:15.945825Z","steps":["trace[866514615] 'process raft request'  (duration: 180.806444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:15.946042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.150959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:15.946098Z","caller":"traceutil/trace.go:171","msg":"trace[2012632869] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:515; }","duration":"192.218501ms","start":"2024-08-29T18:06:15.753869Z","end":"2024-08-29T18:06:15.946088Z","steps":["trace[2012632869] 'agreement among raft nodes before linearized reading'  (duration: 192.106939ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946172Z","caller":"traceutil/trace.go:171","msg":"trace[142374409] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"101.01965ms","start":"2024-08-29T18:06:15.845144Z","end":"2024-08-29T18:06:15.946163Z","steps":["trace[142374409] 'process raft request'  (duration: 100.171837ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946262Z","caller":"traceutil/trace.go:171","msg":"trace[1373251369] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"101.10566ms","start":"2024-08-29T18:06:15.845146Z","end":"2024-08-29T18:06:15.946252Z","steps":["trace[1373251369] 'process raft request'  (duration: 100.19817ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:06:15.946280Z","caller":"traceutil/trace.go:171","msg":"trace[650896772] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"181.671652ms","start":"2024-08-29T18:06:15.764601Z","end":"2024-08-29T18:06:15.946273Z","steps":["trace[650896772] 'process raft request'  (duration: 180.68021ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:06:15.947176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.060944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/local-path\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:06:15.947209Z","caller":"traceutil/trace.go:171","msg":"trace[2050858094] range","detail":"{range_begin:/registry/storageclasses/local-path; range_end:; response_count:0; response_revision:518; }","duration":"102.103563ms","start":"2024-08-29T18:06:15.845096Z","end":"2024-08-29T18:06:15.947200Z","steps":["trace[2050858094] 'agreement among raft nodes before linearized reading'  (duration: 101.928133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:07:09.800169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.050132ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031540939107167 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/gadget/gadget-xpbfc\" mod_revision:1165 > success:<request_put:<key:\"/registry/pods/gadget/gadget-xpbfc\" value_size:12390 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-xpbfc\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-29T18:07:09.800248Z","caller":"traceutil/trace.go:171","msg":"trace[1408882006] linearizableReadLoop","detail":"{readStateIndex:1206; appliedIndex:1205; }","duration":"133.531974ms","start":"2024-08-29T18:07:09.666705Z","end":"2024-08-29T18:07:09.800237Z","steps":["trace[1408882006] 'read index received'  (duration: 19.946533ms)","trace[1408882006] 'applied index is now lower than readState.Index'  (duration: 113.584532ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:07:09.800310Z","caller":"traceutil/trace.go:171","msg":"trace[645338846] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"199.150213ms","start":"2024-08-29T18:07:09.601149Z","end":"2024-08-29T18:07:09.800300Z","steps":["trace[645338846] 'process raft request'  (duration: 85.48217ms)","trace[645338846] 'compare'  (duration: 112.96922ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:07:09.800446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.733421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-proxy-56c89.17f0453e2283edaa\" ","response":"range_response_count:1 size:811"}
	{"level":"info","ts":"2024-08-29T18:07:09.800570Z","caller":"traceutil/trace.go:171","msg":"trace[1967698203] range","detail":"{range_begin:/registry/events/kube-system/registry-proxy-56c89.17f0453e2283edaa; range_end:; response_count:1; response_revision:1175; }","duration":"133.858756ms","start":"2024-08-29T18:07:09.666695Z","end":"2024-08-29T18:07:09.800554Z","steps":["trace[1967698203] 'agreement among raft nodes before linearized reading'  (duration: 133.655669ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:07:32.539822Z","caller":"traceutil/trace.go:171","msg":"trace[474774062] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"116.907065ms","start":"2024-08-29T18:07:32.422893Z","end":"2024-08-29T18:07:32.539801Z","steps":["trace[474774062] 'process raft request'  (duration: 116.785483ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:16:02.407524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1637}
	{"level":"info","ts":"2024-08-29T18:16:02.431918Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1637,"took":"23.970648ms","hash":2632862633,"current-db-size-bytes":6815744,"current-db-size":"6.8 MB","current-db-size-in-use-bytes":3559424,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2024-08-29T18:16:02.431962Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2632862633,"revision":1637,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T18:21:02.412239Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2057}
	{"level":"info","ts":"2024-08-29T18:21:02.428411Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2057,"took":"15.706864ms","hash":1860476993,"current-db-size-bytes":6815744,"current-db-size":"6.8 MB","current-db-size-in-use-bytes":5165056,"current-db-size-in-use":"5.2 MB"}
	{"level":"info","ts":"2024-08-29T18:21:02.428456Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1860476993,"revision":2057,"compact-revision":1637}
	
	
	==> gcp-auth [751a953e0230f7226fd0d5854c1b2e02172545fd27536cb15928df5e0e27c66c] <==
	2024/08/29 18:08:00 Ready to write response ...
	2024/08/29 18:16:13 Ready to marshal response ...
	2024/08/29 18:16:13 Ready to write response ...
	2024/08/29 18:16:14 Ready to marshal response ...
	2024/08/29 18:16:14 Ready to write response ...
	2024/08/29 18:16:23 Ready to marshal response ...
	2024/08/29 18:16:23 Ready to write response ...
	2024/08/29 18:16:42 Ready to marshal response ...
	2024/08/29 18:16:42 Ready to write response ...
	2024/08/29 18:17:04 Ready to marshal response ...
	2024/08/29 18:17:04 Ready to write response ...
	2024/08/29 18:17:07 Ready to marshal response ...
	2024/08/29 18:17:07 Ready to write response ...
	2024/08/29 18:17:07 Ready to marshal response ...
	2024/08/29 18:17:07 Ready to write response ...
	2024/08/29 18:17:15 Ready to marshal response ...
	2024/08/29 18:17:15 Ready to write response ...
	2024/08/29 18:17:40 Ready to marshal response ...
	2024/08/29 18:17:40 Ready to write response ...
	2024/08/29 18:17:40 Ready to marshal response ...
	2024/08/29 18:17:40 Ready to write response ...
	2024/08/29 18:17:40 Ready to marshal response ...
	2024/08/29 18:17:40 Ready to write response ...
	2024/08/29 18:18:33 Ready to marshal response ...
	2024/08/29 18:18:33 Ready to write response ...
	
	
	==> kernel <==
	 18:21:28 up  2:03,  0 users,  load average: 0.02, 0.19, 0.28
	Linux addons-970414 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [fc407b261b55a78bf54620b8c2bed400d1d6006ded302d57add8e43b1f68cf0f] <==
	I0829 18:19:19.546808       1 main.go:299] handling current node
	I0829 18:19:29.552836       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:19:29.552880       1 main.go:299] handling current node
	I0829 18:19:39.547503       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:19:39.547546       1 main.go:299] handling current node
	I0829 18:19:49.546289       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:19:49.546325       1 main.go:299] handling current node
	I0829 18:19:59.553546       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:19:59.553580       1 main.go:299] handling current node
	I0829 18:20:09.550588       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:20:09.550628       1 main.go:299] handling current node
	I0829 18:20:19.546478       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:20:19.546515       1 main.go:299] handling current node
	I0829 18:20:29.554757       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:20:29.554799       1 main.go:299] handling current node
	I0829 18:20:39.555853       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:20:39.555893       1 main.go:299] handling current node
	I0829 18:20:49.548356       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:20:49.548399       1 main.go:299] handling current node
	I0829 18:20:59.550836       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:20:59.550877       1 main.go:299] handling current node
	I0829 18:21:09.548828       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:21:09.548882       1 main.go:299] handling current node
	I0829 18:21:19.546305       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0829 18:21:19.546346       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b65cd62e3477a0dede53d970c7553de09d24db0719b160d3eada7f9826118b54] <==
	 > logger="UnhandledError"
	E0829 18:07:50.129455       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.191.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.191.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.191.20:443: connect: connection refused" logger="UnhandledError"
	I0829 18:07:50.162059       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 18:16:08.739816       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:16:09.755954       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:16:14.377413       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:16:14.646684       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.164.80"}
	I0829 18:16:33.632960       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 18:16:58.558923       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.558971       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.571597       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.645767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.645925       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.645986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.653426       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.653571       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:16:58.671184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:16:58.671217       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:16:59.646907       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:16:59.671973       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0829 18:16:59.768892       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0829 18:17:05.584431       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.27:50460: read: connection reset by peer
	E0829 18:17:31.357611       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0829 18:17:40.550312       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.202.155"}
	I0829 18:18:33.360808       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.143.28"}
	
	
	==> kube-controller-manager [70642d5cd8ef0ec5206b7ba3cb3c87264fc94635f7888331b1e157fd5e5164e7] <==
	W0829 18:19:29.100281       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:29.100324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:51.964224       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:51.964262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:53.398349       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:53.398389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:54.661323       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:54.661362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:20:09.956278       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:09.956320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:20:26.317690       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:26.317729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:20:36.763756       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:36.763794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:20:41.764577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:41.764614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:20:52.749352       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:52.749391       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:21:18.816960       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:21:18.817000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:21:20.653437       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:21:20.653473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:21:23.068477       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:21:23.068513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:21:27.524844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="4.757µs"
	
	
	==> kube-proxy [f3c75142fecd2c76b8247ec40a74b73fb689ea8a267d019c6b122778020c71bd] <==
	I0829 18:06:14.059690       1 server_linux.go:66] "Using iptables proxy"
	I0829 18:06:15.156032       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0829 18:06:15.158564       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:15.952517       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0829 18:06:15.952637       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:15.966318       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:15.967679       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:15.967714       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:15.969000       1 config.go:197] "Starting service config controller"
	I0829 18:06:15.969038       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:15.969060       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:15.969064       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:15.969485       1 config.go:326] "Starting node config controller"
	I0829 18:06:15.969491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:16.146707       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:06:16.150259       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:16.150276       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cb91925e814867079af9f0a475c89993d2c879f411b3bdcf2d08ba6f5b3c1f40] <==
	W0829 18:06:03.754191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:03.755458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:06:03.755510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:03.755547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:06:03.755582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.754392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:03.755613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.755907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0829 18:06:03.755927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0829 18:06:03.755940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:03.755944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:03.755960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0829 18:06:03.755964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.755928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:03.756013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:03.756050       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:03.756071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:04.767168       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:04.767208       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:06:04.816545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:04.816611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0829 18:06:06.651649       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:20:26 addons-970414 kubelet[1626]: E0829 18:20:26.424582    1626 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955626424314189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:27 addons-970414 kubelet[1626]: E0829 18:20:27.155496    1626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ddd0079b-3cc0-46e0-bbb3-756312e7522b"
	Aug 29 18:20:36 addons-970414 kubelet[1626]: E0829 18:20:36.427337    1626 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955636427121967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:36 addons-970414 kubelet[1626]: E0829 18:20:36.427384    1626 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955636427121967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:38 addons-970414 kubelet[1626]: E0829 18:20:38.155957    1626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ddd0079b-3cc0-46e0-bbb3-756312e7522b"
	Aug 29 18:20:46 addons-970414 kubelet[1626]: E0829 18:20:46.429276    1626 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955646429070814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:46 addons-970414 kubelet[1626]: E0829 18:20:46.429314    1626 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955646429070814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:52 addons-970414 kubelet[1626]: E0829 18:20:52.155508    1626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ddd0079b-3cc0-46e0-bbb3-756312e7522b"
	Aug 29 18:20:56 addons-970414 kubelet[1626]: E0829 18:20:56.432087    1626 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955656431899582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:56 addons-970414 kubelet[1626]: E0829 18:20:56.432118    1626 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955656431899582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:06 addons-970414 kubelet[1626]: E0829 18:21:06.176962    1626 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f, memory: /docker/41a3cf6921c1976e27e3122e19bc7bb470b2823d95081008d1618238cfcd6b4f/system.slice/kubelet.service"
	Aug 29 18:21:06 addons-970414 kubelet[1626]: E0829 18:21:06.434935    1626 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955666434671574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:06 addons-970414 kubelet[1626]: E0829 18:21:06.434966    1626 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955666434671574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:07 addons-970414 kubelet[1626]: E0829 18:21:07.156277    1626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ddd0079b-3cc0-46e0-bbb3-756312e7522b"
	Aug 29 18:21:16 addons-970414 kubelet[1626]: E0829 18:21:16.437638    1626 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955676437407259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:16 addons-970414 kubelet[1626]: E0829 18:21:16.437671    1626 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955676437407259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:20 addons-970414 kubelet[1626]: E0829 18:21:20.156014    1626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ddd0079b-3cc0-46e0-bbb3-756312e7522b"
	Aug 29 18:21:26 addons-970414 kubelet[1626]: E0829 18:21:26.439556    1626 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955686439387380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:26 addons-970414 kubelet[1626]: E0829 18:21:26.439589    1626 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955686439387380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597933,},InodesUsed:&UInt64Value{Value:235,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:28 addons-970414 kubelet[1626]: I0829 18:21:28.836572    1626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lxp2\" (UniqueName: \"kubernetes.io/projected/a866f6c5-ff40-4062-986b-ddae9310879c-kube-api-access-5lxp2\") pod \"a866f6c5-ff40-4062-986b-ddae9310879c\" (UID: \"a866f6c5-ff40-4062-986b-ddae9310879c\") "
	Aug 29 18:21:28 addons-970414 kubelet[1626]: I0829 18:21:28.836620    1626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a866f6c5-ff40-4062-986b-ddae9310879c-tmp-dir\") pod \"a866f6c5-ff40-4062-986b-ddae9310879c\" (UID: \"a866f6c5-ff40-4062-986b-ddae9310879c\") "
	Aug 29 18:21:28 addons-970414 kubelet[1626]: I0829 18:21:28.836924    1626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a866f6c5-ff40-4062-986b-ddae9310879c-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "a866f6c5-ff40-4062-986b-ddae9310879c" (UID: "a866f6c5-ff40-4062-986b-ddae9310879c"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 29 18:21:28 addons-970414 kubelet[1626]: I0829 18:21:28.838327    1626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a866f6c5-ff40-4062-986b-ddae9310879c-kube-api-access-5lxp2" (OuterVolumeSpecName: "kube-api-access-5lxp2") pod "a866f6c5-ff40-4062-986b-ddae9310879c" (UID: "a866f6c5-ff40-4062-986b-ddae9310879c"). InnerVolumeSpecName "kube-api-access-5lxp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:21:28 addons-970414 kubelet[1626]: I0829 18:21:28.937904    1626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5lxp2\" (UniqueName: \"kubernetes.io/projected/a866f6c5-ff40-4062-986b-ddae9310879c-kube-api-access-5lxp2\") on node \"addons-970414\" DevicePath \"\""
	Aug 29 18:21:28 addons-970414 kubelet[1626]: I0829 18:21:28.937935    1626 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a866f6c5-ff40-4062-986b-ddae9310879c-tmp-dir\") on node \"addons-970414\" DevicePath \"\""
	
	
	==> storage-provisioner [fc284d6f42abd5ee85cea3d425a167f1747f738b8330187c43ca42227f77adb7] <==
	I0829 18:06:30.446216       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:06:30.457153       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:06:30.457203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:06:30.464533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:06:30.464681       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278!
	I0829 18:06:30.464679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f572a30f-1e05-4d7e-a66a-2b263d676001", APIVersion:"v1", ResourceVersion:"937", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278 became leader
	I0829 18:06:30.565419       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-970414_9fb63c65-4a4b-42bf-b37e-204ce44bd278!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-970414 -n addons-970414
helpers_test.go:261: (dbg) Run:  kubectl --context addons-970414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox metrics-server-8988944d9-jss9n
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-970414 describe pod busybox metrics-server-8988944d9-jss9n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-970414 describe pod busybox metrics-server-8988944d9-jss9n: exit status 1 (60.933694ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-970414/192.168.49.2
	Start Time:       Thu, 29 Aug 2024 18:08:00 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9wnnt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9wnnt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-970414
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m16s (x42 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8988944d9-jss9n" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-970414 describe pod busybox metrics-server-8988944d9-jss9n: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (326.20s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.418362443s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 image ls: (2.211176773s)
functional_test.go:446: expected "kicbase/echo-server:functional-108290" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.63s)

Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 3.93
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.05
18 TestDownloadOnly/v1.31.0/DeleteAll 0.19
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.03
21 TestBinaryMirror 0.73
22 TestOffline 50.47
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 153.64
31 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/parallel/InspektorGadget 10.83
37 TestAddons/parallel/HelmTiller 8.66
39 TestAddons/parallel/CSI 55.64
40 TestAddons/parallel/Headlamp 15.28
41 TestAddons/parallel/CloudSpanner 6.45
42 TestAddons/parallel/LocalPath 50.9
43 TestAddons/parallel/NvidiaDevicePlugin 6.44
44 TestAddons/parallel/Yakd 10.71
45 TestAddons/StoppedEnableDisable 5.99
46 TestCertOptions 25.87
47 TestCertExpiration 238.34
49 TestForceSystemdFlag 26.95
50 TestForceSystemdEnv 27.92
52 TestKVMDriverInstallOrUpdate 1.22
56 TestErrorSpam/setup 19.61
57 TestErrorSpam/start 0.54
58 TestErrorSpam/status 0.83
59 TestErrorSpam/pause 1.43
60 TestErrorSpam/unpause 1.56
61 TestErrorSpam/stop 1.33
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 38.06
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 27.71
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.96
73 TestFunctional/serial/CacheCmd/cache/add_local 0.91
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 40.92
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.28
84 TestFunctional/serial/LogsFileCmd 1.29
85 TestFunctional/serial/InvalidService 4.37
87 TestFunctional/parallel/ConfigCmd 0.32
88 TestFunctional/parallel/DashboardCmd 7.72
89 TestFunctional/parallel/DryRun 0.38
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.95
95 TestFunctional/parallel/ServiceCmdConnect 18.53
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 29.14
99 TestFunctional/parallel/SSHCmd 0.48
100 TestFunctional/parallel/CpCmd 1.65
101 TestFunctional/parallel/MySQL 20.46
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.69
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
111 TestFunctional/parallel/License 0.19
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
115 TestFunctional/parallel/ImageCommands/ImageListShort 1.48
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.41
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.43
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
119 TestFunctional/parallel/ImageCommands/ImageBuild 5.05
120 TestFunctional/parallel/ImageCommands/Setup 0.43
121 TestFunctional/parallel/Version/short 0.05
122 TestFunctional/parallel/Version/components 0.61
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
124 TestFunctional/parallel/MountCmd/any-port 17.65
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.71
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.92
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.89
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
131 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
132 TestFunctional/parallel/MountCmd/specific-port 1.48
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
134 TestFunctional/parallel/ProfileCmd/profile_list 0.37
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.47
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.24
142 TestFunctional/parallel/ServiceCmd/List 0.94
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.97
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.65
145 TestFunctional/parallel/ServiceCmd/Format 0.49
146 TestFunctional/parallel/ServiceCmd/URL 0.51
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 104.6
160 TestMultiControlPlane/serial/DeployApp 4.83
161 TestMultiControlPlane/serial/PingHostFromPods 0.97
162 TestMultiControlPlane/serial/AddWorkerNode 59.97
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.6
165 TestMultiControlPlane/serial/CopyFile 14.93
166 TestMultiControlPlane/serial/StopSecondaryNode 12.47
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.46
168 TestMultiControlPlane/serial/RestartSecondaryNode 20.57
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.18
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 169.87
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.27
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.45
173 TestMultiControlPlane/serial/StopCluster 35.46
174 TestMultiControlPlane/serial/RestartCluster 59.77
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
176 TestMultiControlPlane/serial/AddSecondaryNode 40.41
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.61
181 TestJSONOutput/start/Command 41.37
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.66
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.56
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.73
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
206 TestKicCustomNetwork/create_custom_network 25.93
207 TestKicCustomNetwork/use_default_bridge_network 22.42
208 TestKicExistingNetwork 21.72
209 TestKicCustomSubnet 26.74
210 TestKicStaticIP 23.04
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 48.32
215 TestMountStart/serial/StartWithMountFirst 5.58
216 TestMountStart/serial/VerifyMountFirst 0.23
217 TestMountStart/serial/StartWithMountSecond 5.59
218 TestMountStart/serial/VerifyMountSecond 0.23
219 TestMountStart/serial/DeleteFirst 1.58
220 TestMountStart/serial/VerifyMountPostDelete 0.23
221 TestMountStart/serial/Stop 1.16
222 TestMountStart/serial/RestartStopped 7.23
223 TestMountStart/serial/VerifyMountPostStop 0.23
226 TestMultiNode/serial/FreshStart2Nodes 67.28
227 TestMultiNode/serial/DeployApp2Nodes 3.81
228 TestMultiNode/serial/PingHostFrom2Pods 0.66
229 TestMultiNode/serial/AddNode 26.51
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.27
232 TestMultiNode/serial/CopyFile 8.62
233 TestMultiNode/serial/StopNode 2.05
234 TestMultiNode/serial/StartAfterStop 8.88
235 TestMultiNode/serial/RestartKeepsNodes 99.96
236 TestMultiNode/serial/DeleteNode 5.19
237 TestMultiNode/serial/StopMultiNode 23.67
238 TestMultiNode/serial/RestartMultiNode 47.17
239 TestMultiNode/serial/ValidateNameConflict 21.01
244 TestPreload 112.91
246 TestScheduledStopUnix 97.33
249 TestInsufficientStorage 9.47
250 TestRunningBinaryUpgrade 116.18
252 TestKubernetesUpgrade 334.85
253 TestMissingContainerUpgrade 152.94
255 TestStoppedBinaryUpgrade/Setup 0.51
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
257 TestNoKubernetes/serial/StartWithK8s 26.83
258 TestStoppedBinaryUpgrade/Upgrade 104.29
259 TestNoKubernetes/serial/StartWithStopK8s 11.33
260 TestNoKubernetes/serial/Start 5.12
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
262 TestNoKubernetes/serial/ProfileList 0.86
263 TestNoKubernetes/serial/Stop 1.18
264 TestNoKubernetes/serial/StartNoArgs 11.46
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
275 TestPause/serial/Start 47.14
276 TestPause/serial/SecondStartNoReconfiguration 26.12
284 TestNetworkPlugins/group/false 3.16
285 TestPause/serial/Pause 0.77
286 TestPause/serial/VerifyStatus 0.31
287 TestPause/serial/Unpause 0.69
291 TestPause/serial/PauseAgain 0.94
292 TestPause/serial/DeletePaused 3.87
293 TestPause/serial/VerifyDeletedResources 14.15
295 TestStartStop/group/old-k8s-version/serial/FirstStart 109.97
297 TestStartStop/group/no-preload/serial/FirstStart 56.44
298 TestStartStop/group/no-preload/serial/DeployApp 9.23
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.78
300 TestStartStop/group/no-preload/serial/Stop 11.8
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
302 TestStartStop/group/no-preload/serial/SecondStart 262.13
303 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.9
305 TestStartStop/group/old-k8s-version/serial/Stop 11.83
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
307 TestStartStop/group/old-k8s-version/serial/SecondStart 143.27
309 TestStartStop/group/embed-certs/serial/FirstStart 41.98
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.62
312 TestStartStop/group/embed-certs/serial/DeployApp 8.24
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
314 TestStartStop/group/embed-certs/serial/Stop 11.86
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/embed-certs/serial/SecondStart 261.79
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.24
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.8
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.82
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.37
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
325 TestStartStop/group/old-k8s-version/serial/Pause 2.75
327 TestStartStop/group/newest-cni/serial/FirstStart 28.84
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
330 TestStartStop/group/newest-cni/serial/Stop 1.2
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
332 TestStartStop/group/newest-cni/serial/SecondStart 12.73
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
336 TestStartStop/group/newest-cni/serial/Pause 2.53
337 TestNetworkPlugins/group/auto/Start 40.6
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
341 TestStartStop/group/no-preload/serial/Pause 2.6
342 TestNetworkPlugins/group/flannel/Start 48.72
343 TestNetworkPlugins/group/auto/KubeletFlags 0.26
344 TestNetworkPlugins/group/auto/NetCatPod 10.19
345 TestNetworkPlugins/group/auto/DNS 0.12
346 TestNetworkPlugins/group/auto/Localhost 0.1
347 TestNetworkPlugins/group/auto/HairPin 0.12
348 TestNetworkPlugins/group/enable-default-cni/Start 64.84
349 TestNetworkPlugins/group/flannel/ControllerPod 6.01
350 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
351 TestNetworkPlugins/group/flannel/NetCatPod 10.18
352 TestNetworkPlugins/group/flannel/DNS 0.12
353 TestNetworkPlugins/group/flannel/Localhost 0.1
354 TestNetworkPlugins/group/flannel/HairPin 0.1
355 TestNetworkPlugins/group/bridge/Start 66.79
356 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
357 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
358 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
360 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
363 TestNetworkPlugins/group/calico/Start 49.35
364 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
365 TestStartStop/group/embed-certs/serial/Pause 3.23
366 TestNetworkPlugins/group/kindnet/Start 45.7
367 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
368 TestNetworkPlugins/group/bridge/NetCatPod 10.2
369 TestNetworkPlugins/group/bridge/DNS 0.18
370 TestNetworkPlugins/group/bridge/Localhost 0.12
371 TestNetworkPlugins/group/bridge/HairPin 0.16
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
374 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.78
376 TestNetworkPlugins/group/custom-flannel/Start 43.95
377 TestNetworkPlugins/group/calico/ControllerPod 6.01
378 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
379 TestNetworkPlugins/group/calico/KubeletFlags 0.26
380 TestNetworkPlugins/group/calico/NetCatPod 9.18
381 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
382 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
383 TestNetworkPlugins/group/calico/DNS 0.13
384 TestNetworkPlugins/group/calico/Localhost 0.11
385 TestNetworkPlugins/group/calico/HairPin 0.12
386 TestNetworkPlugins/group/kindnet/DNS 0.13
387 TestNetworkPlugins/group/kindnet/Localhost 0.11
388 TestNetworkPlugins/group/kindnet/HairPin 0.13
389 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
390 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.19
391 TestNetworkPlugins/group/custom-flannel/DNS 0.12
392 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
393 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (8.57s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-236186 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-236186 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.57008058s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.57s)
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-236186
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-236186: exit status 85 (54.226262ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-236186 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |          |
	|         | -p download-only-236186        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:11.723082   32162 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:11.723208   32162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:11.723219   32162 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:11.723225   32162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:11.723424   32162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	W0829 18:05:11.723550   32162 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19531-25336/.minikube/config/config.json: open /home/jenkins/minikube-integration/19531-25336/.minikube/config/config.json: no such file or directory
	I0829 18:05:11.724092   32162 out.go:352] Setting JSON to true
	I0829 18:05:11.724982   32162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6463,"bootTime":1724948249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:11.725038   32162 start.go:139] virtualization: kvm guest
	I0829 18:05:11.727340   32162 out.go:97] [download-only-236186] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:05:11.727433   32162 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:05:11.727493   32162 notify.go:220] Checking for updates...
	I0829 18:05:11.728975   32162 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:11.730321   32162 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:11.731595   32162 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:05:11.732832   32162 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	I0829 18:05:11.734093   32162 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:05:11.737271   32162 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:11.737484   32162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:11.759939   32162 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:11.760060   32162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:12.085369   32162 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:05:12.07659093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:12.085467   32162 docker.go:307] overlay module found
	I0829 18:05:12.086882   32162 out.go:97] Using the docker driver based on user configuration
	I0829 18:05:12.086904   32162 start.go:297] selected driver: docker
	I0829 18:05:12.086910   32162 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:12.086984   32162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:12.131265   32162 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:05:12.123386246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:12.131474   32162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:12.131974   32162 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0829 18:05:12.132133   32162 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:05:12.133868   32162 out.go:169] Using Docker driver with root privileges
	I0829 18:05:12.135078   32162 cni.go:84] Creating CNI manager for ""
	I0829 18:05:12.135094   32162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0829 18:05:12.135109   32162 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 18:05:12.135182   32162 start.go:340] cluster config:
	{Name:download-only-236186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-236186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:12.136816   32162 out.go:97] Starting "download-only-236186" primary control-plane node in "download-only-236186" cluster
	I0829 18:05:12.136839   32162 cache.go:121] Beginning downloading kic base image for docker with crio
	I0829 18:05:12.138108   32162 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:05:12.138141   32162 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 18:05:12.138238   32162 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:05:12.154592   32162 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:12.154762   32162 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:05:12.154868   32162 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:12.161936   32162 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:05:12.161959   32162 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:12.162082   32162 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 18:05:12.163877   32162 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 18:05:12.163892   32162 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0829 18:05:12.188320   32162 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19531-25336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-236186 host does not exist
	  To start a cluster, run: "minikube start -p download-only-236186"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-236186
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (3.93s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-125708 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-125708 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.93411708s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (3.93s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-125708
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-125708: exit status 85 (52.488367ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-236186 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-236186        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-236186        | download-only-236186 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only        | download-only-125708 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-125708        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:20.661678   32529 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:20.661763   32529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:20.661767   32529 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:20.661771   32529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:20.661931   32529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:05:20.662454   32529 out.go:352] Setting JSON to true
	I0829 18:05:20.663225   32529 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6472,"bootTime":1724948249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:20.663277   32529 start.go:139] virtualization: kvm guest
	I0829 18:05:20.665314   32529 out.go:97] [download-only-125708] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:05:20.665434   32529 notify.go:220] Checking for updates...
	I0829 18:05:20.666723   32529 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:20.667994   32529 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:20.669035   32529 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:05:20.670080   32529 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	I0829 18:05:20.671174   32529 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:05:20.673139   32529 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:20.673311   32529 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:20.695628   32529 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:20.695726   32529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:20.738343   32529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-08-29 18:05:20.730208903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:20.738445   32529 docker.go:307] overlay module found
	I0829 18:05:20.740063   32529 out.go:97] Using the docker driver based on user configuration
	I0829 18:05:20.740089   32529 start.go:297] selected driver: docker
	I0829 18:05:20.740096   32529 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:20.740182   32529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:20.785967   32529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-08-29 18:05:20.777879074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:20.786131   32529 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:20.786601   32529 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0829 18:05:20.786755   32529 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:05:20.788409   32529 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-125708 host does not exist
	  To start a cluster, run: "minikube start -p download-only-125708"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-125708
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.03s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-806390 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-806390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-806390
--- PASS: TestDownloadOnlyKic (1.03s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-708315 --alsologtostderr --binary-mirror http://127.0.0.1:45431 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-708315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-708315
--- PASS: TestBinaryMirror (0.73s)

TestOffline (50.47s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-238620 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-238620 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (48.157668607s)
helpers_test.go:175: Cleaning up "offline-crio-238620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-238620
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-238620: (2.309248877s)
--- PASS: TestOffline (50.47s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-970414
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-970414: exit status 85 (44.349741ms)

-- stdout --
	* Profile "addons-970414" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-970414"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-970414
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-970414: exit status 85 (45.375071ms)

-- stdout --
	* Profile "addons-970414" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-970414"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (153.64s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-970414 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-970414 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m33.63668632s)
--- PASS: TestAddons/Setup (153.64s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-970414 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-970414 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/parallel/InspektorGadget (10.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xpbfc" [2d71d9a2-3ff0-4ff6-bb8d-5378aaa397b1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003267976s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-970414
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-970414: (5.822512915s)
--- PASS: TestAddons/parallel/InspektorGadget (10.83s)

TestAddons/parallel/HelmTiller (8.66s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.35004ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-h8shr" [53f4571a-d63e-4721-aa85-b44922772189] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.002894753s
addons_test.go:475: (dbg) Run:  kubectl --context addons-970414 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-970414 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.190472509s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.66s)

TestAddons/parallel/CSI (55.64s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.161024ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-970414 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
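The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` invocations above are the test harness polling the claim until its phase reports `Bound`. A minimal standalone sketch of the same wait loop (the `wait_for_phase` helper is hypothetical, not part of the minikube test suite):

```shell
#!/usr/bin/env bash
# wait_for_phase CMD...: re-runs CMD until it prints "Bound" or TIMEOUT
# seconds elapse. The harness does the equivalent with:
#   kubectl --context addons-970414 get pvc hpvc -o jsonpath={.status.phase} -n default
wait_for_phase() {
  local timeout=${TIMEOUT:-360} interval=${INTERVAL:-2} elapsed=0 phase
  while [ "$elapsed" -lt "$timeout" ]; do
    phase=$("$@" 2>/dev/null)
    if [ "$phase" = "Bound" ]; then
      echo "Bound"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out waiting for Bound (last phase: ${phase:-<none>})" >&2
  return 1
}
```

Against a live cluster this would be invoked as `wait_for_phase kubectl --context addons-970414 get pvc hpvc -n default -o 'jsonpath={.status.phase}'`.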
addons_test.go:580: (dbg) Run:  kubectl --context addons-970414 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b71dc213-1cf5-4899-be32-5f199c2a2738] Pending
helpers_test.go:344: "task-pv-pod" [b71dc213-1cf5-4899-be32-5f199c2a2738] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b71dc213-1cf5-4899-be32-5f199c2a2738] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.00397708s
addons_test.go:590: (dbg) Run:  kubectl --context addons-970414 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-970414 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-970414 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-970414 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-970414 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-970414 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-970414 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6d237053-7f4b-4ebf-8ec5-142786b9ea43] Pending
helpers_test.go:344: "task-pv-pod-restore" [6d237053-7f4b-4ebf-8ec5-142786b9ea43] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6d237053-7f4b-4ebf-8ec5-142786b9ea43] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003051564s
addons_test.go:632: (dbg) Run:  kubectl --context addons-970414 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-970414 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-970414 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-970414 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.493004867s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.64s)

TestAddons/parallel/Headlamp (15.28s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-970414 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-bp5pg" [0f3e92d7-db30-4777-bbaf-1ba5d4344d39] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-bp5pg" [0f3e92d7-db30-4777-bbaf-1ba5d4344d39] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003351868s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-970414 addons disable headlamp --alsologtostderr -v=1: (5.582658978s)
--- PASS: TestAddons/parallel/Headlamp (15.28s)

TestAddons/parallel/CloudSpanner (6.45s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-zhn4j" [8295c63a-3c41-4ee8-a117-0a94b2a76d45] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004111527s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-970414
--- PASS: TestAddons/parallel/CloudSpanner (6.45s)

TestAddons/parallel/LocalPath (50.9s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-970414 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-970414 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-970414 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7bf4aad5-fbc6-491c-b7ab-f932d727e5b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7bf4aad5-fbc6-491c-b7ab-f932d727e5b0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7bf4aad5-fbc6-491c-b7ab-f932d727e5b0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004030342s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-970414 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 ssh "cat /opt/local-path-provisioner/pvc-ca648e25-cf9d-4c60-9189-df073bc95d42_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-970414 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-970414 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-970414 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.067959729s)
--- PASS: TestAddons/parallel/LocalPath (50.90s)

TestAddons/parallel/NvidiaDevicePlugin (6.44s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-njmrn" [5c975a82-28c1-431d-b4e4-b89312486f53] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003202752s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-970414
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)

TestAddons/parallel/Yakd (10.71s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-b5hns" [d520598d-908d-4250-b7c2-93b81fb435d8] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003566722s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-970414 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-970414 addons disable yakd --alsologtostderr -v=1: (5.701608067s)
--- PASS: TestAddons/parallel/Yakd (10.71s)

TestAddons/StoppedEnableDisable (5.99s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-970414
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-970414: (5.763738204s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-970414
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-970414
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-970414
--- PASS: TestAddons/StoppedEnableDisable (5.99s)

TestCertOptions (25.87s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-127011 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-127011 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.386876644s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-127011 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-127011 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-127011 -- "sudo cat /etc/kubernetes/admin.conf"
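The `openssl x509 -text -noout` step above is how the test confirms that the extra `--apiserver-ips` / `--apiserver-names` values ended up in the apiserver certificate's Subject Alternative Name extension. A sketch of that check as a reusable helper (`cert_has_san` is a hypothetical name, not from the test suite):

```shell
#!/usr/bin/env bash
# cert_has_san CERT NAME: succeeds if CERT's Subject Alternative Name block
# contains NAME, e.g. "DNS:www.google.com" or "IP Address:192.168.15.15".
# The test runs the equivalent inside the node against
# /var/lib/minikube/certs/apiserver.crt.
cert_has_san() {
  local cert=$1 name=$2
  openssl x509 -text -noout -in "$cert" |
    grep -A1 'Subject Alternative Name' |
    grep -q "$name"
}
```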
helpers_test.go:175: Cleaning up "cert-options-127011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-127011
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-127011: (1.88219153s)
--- PASS: TestCertOptions (25.87s)

TestCertExpiration (238.34s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-234976 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-234976 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.684241506s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-234976 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0829 18:54:12.699221   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-234976 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (31.401993515s)
helpers_test.go:175: Cleaning up "cert-expiration-234976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-234976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-234976: (2.254835321s)
--- PASS: TestCertExpiration (238.34s)

TestForceSystemdFlag (26.95s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-604962 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-604962 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.366297093s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-604962 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
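Reading `/etc/crio/crio.conf.d/02-crio.conf` above is how the test verifies that `--force-systemd` made CRI-O use the systemd cgroup manager. A minimal sketch of that assertion against a config file (`crio_uses_systemd` is a hypothetical helper name):

```shell
#!/usr/bin/env bash
# crio_uses_systemd CONF: succeeds if the CRI-O drop-in config selects the
# systemd cgroup manager, i.e. contains a line like:
#   cgroup_manager = "systemd"
crio_uses_systemd() {
  grep -Eq '^[[:space:]]*cgroup_manager[[:space:]]*=[[:space:]]*"systemd"' "$1"
}
```

In the test itself the file is fetched with `minikube ssh "cat /etc/crio/crio.conf.d/02-crio.conf"` rather than read locally.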
helpers_test.go:175: Cleaning up "force-systemd-flag-604962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-604962
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-604962: (2.318368146s)
--- PASS: TestForceSystemdFlag (26.95s)

TestForceSystemdEnv (27.92s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-054859 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-054859 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.55704333s)
helpers_test.go:175: Cleaning up "force-systemd-env-054859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-054859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-054859: (2.360562761s)
--- PASS: TestForceSystemdEnv (27.92s)

TestKVMDriverInstallOrUpdate (1.22s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.22s)

TestErrorSpam/setup (19.61s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-660315 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-660315 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-660315 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-660315 --driver=docker  --container-runtime=crio: (19.613619259s)
--- PASS: TestErrorSpam/setup (19.61s)

TestErrorSpam/start (0.54s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

TestErrorSpam/status (0.83s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.43s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.56s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (1.33s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 stop: (1.172163373s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-660315 --log_dir /tmp/nospam-660315 stop
--- PASS: TestErrorSpam/stop (1.33s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19531-25336/.minikube/files/etc/test/nested/copy/32150/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.06s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-108290 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-108290 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.062592784s)
--- PASS: TestFunctional/serial/StartWithProxy (38.06s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.71s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-108290 --alsologtostderr -v=8
E0829 18:23:00.912649   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:00.919989   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:00.931344   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:00.952773   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:00.994171   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:01.075599   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:01.237223   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:01.558978   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:02.200557   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:03.482435   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:06.044534   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:11.166360   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-108290 --alsologtostderr -v=8: (27.710346695s)
functional_test.go:663: soft start took 27.711054773s for "functional-108290" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.71s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-108290 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 cache add registry.k8s.io/pause:3.3: (1.067185156s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

TestFunctional/serial/CacheCmd/cache/add_local (0.91s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-108290 /tmp/TestFunctionalserialCacheCmdcacheadd_local4200896728/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cache add minikube-local-cache-test:functional-108290
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cache delete minikube-local-cache-test:functional-108290
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-108290
E0829 18:23:21.408132   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (260.984004ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
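The cache_reload sequence above can be replayed by hand; a sketch, assuming a live cluster with the profile name taken from this log (illustrative only, not part of the harness):

```shell
# Remove the image from the node, confirm it is gone, then restore it.
minikube -p functional-108290 ssh sudo crictl rmi registry.k8s.io/pause:latest
# With the image gone, inspecti exits non-zero (FATA: no such image):
minikube -p functional-108290 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
# "cache reload" pushes cached images back onto the node, after which
# the same inspecti succeeds:
minikube -p functional-108290 cache reload
minikube -p functional-108290 ssh sudo crictl inspecti registry.k8s.io/pause:latest
```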

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 kubectl -- --context functional-108290 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-108290 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (40.92s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-108290 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0829 18:23:41.889541   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-108290 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.916722825s)
functional_test.go:761: restart took 40.91688546s for "functional-108290" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.92s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-108290 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.28s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 logs: (1.280725466s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 logs --file /tmp/TestFunctionalserialLogsFileCmd3884268205/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 logs --file /tmp/TestFunctionalserialLogsFileCmd3884268205/001/logs.txt: (1.287017894s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (4.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-108290 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-108290
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-108290: exit status 115 (301.701447ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30312 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-108290 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)
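The InvalidService failure mode above is reproducible outside the test harness; a sketch using the same manifest and profile names shown in this log (requires a live cluster):

```shell
# A Service whose selector matches no running pod: "minikube service"
# exits 115 with SVC_UNREACHABLE, as in the log above.
kubectl --context functional-108290 apply -f testdata/invalidsvc.yaml
minikube service invalid-svc -p functional-108290; echo "exit: $?"   # logged exit status: 115
kubectl --context functional-108290 delete -f testdata/invalidsvc.yaml
```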

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 config get cpus: exit status 14 (47.946085ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 config get cpus: exit status 14 (46.827828ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
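The config round-trip the test performs can be checked manually; a sketch with the profile from this log (requires the profile to exist):

```shell
# "config get" on an unset key exits 14, the status the test expects.
minikube -p functional-108290 config unset cpus
minikube -p functional-108290 config get cpus; echo "exit: $?"   # logged exit status: 14
minikube -p functional-108290 config set cpus 2
minikube -p functional-108290 config get cpus
minikube -p functional-108290 config unset cpus
```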

TestFunctional/parallel/DashboardCmd (7.72s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-108290 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-108290 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 78048: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.72s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-108290 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-108290 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (164.407598ms)

-- stdout --
	* [functional-108290] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0829 18:24:34.615024   77338 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:24:34.615164   77338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:34.615174   77338 out.go:358] Setting ErrFile to fd 2...
	I0829 18:24:34.615180   77338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:34.615362   77338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:24:34.615887   77338 out.go:352] Setting JSON to false
	I0829 18:24:34.616835   77338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7626,"bootTime":1724948249,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:24:34.616899   77338 start.go:139] virtualization: kvm guest
	I0829 18:24:34.618767   77338 out.go:177] * [functional-108290] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:24:34.619966   77338 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:24:34.619964   77338 notify.go:220] Checking for updates...
	I0829 18:24:34.622807   77338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:24:34.624382   77338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:24:34.625956   77338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	I0829 18:24:34.627567   77338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:24:34.629031   77338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:24:34.631083   77338 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:24:34.631835   77338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:24:34.655747   77338 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:24:34.655884   77338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:24:34.717983   77338 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:24:34.708930181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:24:34.718089   77338 docker.go:307] overlay module found
	I0829 18:24:34.720189   77338 out.go:177] * Using the docker driver based on existing profile
	I0829 18:24:34.721399   77338 start.go:297] selected driver: docker
	I0829 18:24:34.721417   77338 start.go:901] validating driver "docker" against &{Name:functional-108290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-108290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:24:34.721515   77338 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:24:34.723598   77338 out.go:201] 
	W0829 18:24:34.724964   77338 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 18:24:34.726360   77338 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-108290 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
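The dry-run memory validation above can be triggered directly; a sketch using the flags from this log (live docker driver assumed):

```shell
# 250MB is below minikube's usable minimum of 1800MB, so the first
# command exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY; omitting
# --memory lets the dry run pass.
minikube start -p functional-108290 --dry-run --memory 250MB \
  --driver=docker --container-runtime=crio; echo "exit: $?"   # logged exit status: 23
minikube start -p functional-108290 --dry-run \
  --driver=docker --container-runtime=crio
```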

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-108290 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-108290 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (141.891017ms)

-- stdout --
	* [functional-108290] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0829 18:24:33.035559   76049 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:24:33.035692   76049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:33.035702   76049 out.go:358] Setting ErrFile to fd 2...
	I0829 18:24:33.035708   76049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:33.036013   76049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:24:33.036557   76049 out.go:352] Setting JSON to false
	I0829 18:24:33.037721   76049 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7624,"bootTime":1724948249,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:24:33.037780   76049 start.go:139] virtualization: kvm guest
	I0829 18:24:33.040546   76049 out.go:177] * [functional-108290] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0829 18:24:33.041822   76049 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:24:33.041886   76049 notify.go:220] Checking for updates...
	I0829 18:24:33.044588   76049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:24:33.045845   76049 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:24:33.047036   76049 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	I0829 18:24:33.048173   76049 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:24:33.049268   76049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:24:33.050813   76049 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:24:33.051246   76049 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:24:33.074297   76049 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:24:33.074412   76049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:24:33.119176   76049 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:24:33.109759735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:24:33.119274   76049 docker.go:307] overlay module found
	I0829 18:24:33.121101   76049 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0829 18:24:33.122311   76049 start.go:297] selected driver: docker
	I0829 18:24:33.122323   76049 start.go:901] validating driver "docker" against &{Name:functional-108290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-108290 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:24:33.122422   76049 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:24:33.124311   76049 out.go:201] 
	W0829 18:24:33.125356   76049 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 18:24:33.126525   76049 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

TestFunctional/parallel/ServiceCmdConnect (18.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-108290 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-108290 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-xkff2" [ba48c0d6-5aa5-4f43-a454-089029b1e7fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-xkff2" [ba48c0d6-5aa5-4f43-a454-089029b1e7fb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.004392948s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31790
functional_test.go:1675: http://192.168.49.2:31790: success! body:

Hostname: hello-node-connect-67bdd5bbb4-xkff2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31790
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.53s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (29.14s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1e8e505d-4f1a-47c8-8120-092981d1577d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004445151s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-108290 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-108290 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-108290 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-108290 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [00cef4e9-587d-4cbe-9b91-8e730c3b7d1b] Pending
helpers_test.go:344: "sp-pod" [00cef4e9-587d-4cbe-9b91-8e730c3b7d1b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [00cef4e9-587d-4cbe-9b91-8e730c3b7d1b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003875655s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-108290 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-108290 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-108290 delete -f testdata/storage-provisioner/pod.yaml: (1.192570079s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-108290 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [10560e84-c06e-44cd-b99a-ccd9b373bb4a] Pending
helpers_test.go:344: "sp-pod" [10560e84-c06e-44cd-b99a-ccd9b373bb4a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [10560e84-c06e-44cd-b99a-ccd9b373bb4a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003099112s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-108290 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.14s)
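The sequence above checks durability: the first pod touches /tmp/mount/foo, the pod is deleted and recreated from the same PVC, and the second pod lists the file. A minimal local sketch of that check, with a temp directory standing in for the claim-backed volume (PV_DIR is a stand-in introduced here, not part of the test; the real test uses `kubectl exec sp-pod`):

```shell
# Temp dir stands in for the PersistentVolume behind the claim.
PV_DIR="$(mktemp -d)"
touch "$PV_DIR/foo"                   # first pod: touch /tmp/mount/foo
# ...first pod deleted, second pod scheduled; the PVC keeps the volume...
if ls "$PV_DIR" | grep -qx foo; then  # second pod: ls /tmp/mount
  survived=yes
else
  survived=no
fi
rm -r "$PV_DIR"
echo "survived=$survived"
```

The point is that the file's lifetime is tied to the volume, not to either pod.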

                                                
                                    
TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (1.65s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh -n functional-108290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cp functional-108290:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd469876716/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh -n functional-108290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh -n functional-108290 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.65s)

TestFunctional/parallel/MySQL (20.46s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-108290 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-95jkr" [206d0ee3-3a4f-4aff-9e69-943c4ba5793a] Pending
helpers_test.go:344: "mysql-6cdb49bbb-95jkr" [206d0ee3-3a4f-4aff-9e69-943c4ba5793a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-95jkr" [206d0ee3-3a4f-4aff-9e69-943c4ba5793a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.00433336s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-108290 exec mysql-6cdb49bbb-95jkr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-108290 exec mysql-6cdb49bbb-95jkr -- mysql -ppassword -e "show databases;": exit status 1 (98.033865ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-108290 exec mysql-6cdb49bbb-95jkr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-108290 exec mysql-6cdb49bbb-95jkr -- mysql -ppassword -e "show databases;": exit status 1 (107.934528ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-108290 exec mysql-6cdb49bbb-95jkr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.46s)
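The ERROR 2002 failures above are expected: the pod is Running before mysqld has created its socket, so the test simply re-runs the query until it succeeds. A sketch of that retry pattern (PROBE is a placeholder introduced here; the real probe is the `kubectl ... exec ... mysql -ppassword -e "show databases;"` command shown above):

```shell
# Re-run a readiness probe until it succeeds, with a bounded retry count.
PROBE="${PROBE:-true}"   # placeholder; defaults to an always-ready probe
attempts=0
until $PROBE; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 5 ]; then
    echo "probe never became ready" >&2
    exit 1
  fi
  sleep 1
done
echo "ready after $attempts retries"
```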

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/32150/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo cat /etc/test/nested/copy/32150/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.69s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/32150.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo cat /etc/ssl/certs/32150.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/32150.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo cat /usr/share/ca-certificates/32150.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/321502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo cat /etc/ssl/certs/321502.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/321502.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo cat /usr/share/ca-certificates/321502.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-108290 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 ssh "sudo systemctl is-active docker": exit status 1 (285.559867ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 ssh "sudo systemctl is-active containerd": exit status 1 (287.638418ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
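The "Non-zero exit" results above are the success condition here: `systemctl is-active` prints the unit state on stdout and exits non-zero when the unit is not active (status 3 for "inactive"), which ssh then surfaces as "Process exited with status 3". A sketch of that state/exit-code contract, using a stub in place of systemctl so it runs on hosts without the unit (fake_is_active is a stand-in introduced here):

```shell
# Stub mimicking `systemctl is-active <inactive-unit>`: prints the state,
# exits 3. The caller must read both the output and the exit code.
fake_is_active() { echo "inactive"; return 3; }
if state="$(fake_is_active)"; then
  echo "unexpected: runtime is active"
else
  rc=$?   # exit status of the probe, preserved through the if
  echo "state=$state rc=$rc"
fi
```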

                                                
                                    
TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 image ls --format short --alsologtostderr: (1.479823928s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-108290 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-108290
localhost/kicbase/echo-server:functional-108290
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-108290 image ls --format short --alsologtostderr:
I0829 18:24:39.032120   78757 out.go:345] Setting OutFile to fd 1 ...
I0829 18:24:39.032226   78757 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:39.032238   78757 out.go:358] Setting ErrFile to fd 2...
I0829 18:24:39.032243   78757 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:39.032453   78757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
I0829 18:24:39.033111   78757 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:39.033219   78757 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:39.033603   78757 cli_runner.go:164] Run: docker container inspect functional-108290 --format={{.State.Status}}
I0829 18:24:39.052158   78757 ssh_runner.go:195] Run: systemctl --version
I0829 18:24:39.052225   78757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108290
I0829 18:24:39.071268   78757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/functional-108290/id_rsa Username:docker}
I0829 18:24:39.165916   78757 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 18:24:40.460836   78757 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.294882342s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.48s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-108290 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-108290  | 873f75c2988c1 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | alpine             | 0f0eda053dc5c | 44.7MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-108290  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-108290 image ls --format table --alsologtostderr:
I0829 18:24:43.117366   79384 out.go:345] Setting OutFile to fd 1 ...
I0829 18:24:43.117615   79384 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:43.117624   79384 out.go:358] Setting ErrFile to fd 2...
I0829 18:24:43.117628   79384 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:43.117802   79384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
I0829 18:24:43.118321   79384 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:43.118409   79384 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:43.118772   79384 cli_runner.go:164] Run: docker container inspect functional-108290 --format={{.State.Status}}
I0829 18:24:43.135459   79384 ssh_runner.go:195] Run: systemctl --version
I0829 18:24:43.135514   79384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108290
I0829 18:24:43.159265   79384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/functional-108290/id_rsa Username:docker}
I0829 18:24:43.345394   79384 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-108290 image ls --format json --alsologtostderr:
[{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"9d63d525846231f826e4d262486e968231563f4ff23bcc6e44a027ccc13c90e2","repoDigests":[],"repoTags":[],"size":"1465612"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-108290"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":["docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44668625"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"873f75c2988c19ce979f01c5337422674f49a77076f67b58734b363f26449234","repoDigests":["localhost/minikube-local-cache-test@sha256:a2df2f72ce36ed103d69dcdef4221e9742f2b1e68e101de918698660dad8538c"],"repoTags":["localhost/minikube-local-cache-test:functional-108290"],"size":"3330"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-108290 image ls --format json --alsologtostderr:
I0829 18:24:42.711080   79308 out.go:345] Setting OutFile to fd 1 ...
I0829 18:24:42.711298   79308 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:42.711306   79308 out.go:358] Setting ErrFile to fd 2...
I0829 18:24:42.711310   79308 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:42.711486   79308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
I0829 18:24:42.712025   79308 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:42.712114   79308 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:42.712439   79308 cli_runner.go:164] Run: docker container inspect functional-108290 --format={{.State.Status}}
I0829 18:24:42.729323   79308 ssh_runner.go:195] Run: systemctl --version
I0829 18:24:42.729364   79308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108290
I0829 18:24:42.750778   79308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/functional-108290/id_rsa Username:docker}
I0829 18:24:42.949607   79308 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.43s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-108290 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests:
- docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "44668625"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-108290
size: "4943877"
- id: 873f75c2988c19ce979f01c5337422674f49a77076f67b58734b363f26449234
repoDigests:
- localhost/minikube-local-cache-test@sha256:a2df2f72ce36ed103d69dcdef4221e9742f2b1e68e101de918698660dad8538c
repoTags:
- localhost/minikube-local-cache-test:functional-108290
size: "3330"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-108290 image ls --format yaml --alsologtostderr:
I0829 18:24:40.515118   78876 out.go:345] Setting OutFile to fd 1 ...
I0829 18:24:40.515272   78876 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:40.515295   78876 out.go:358] Setting ErrFile to fd 2...
I0829 18:24:40.515308   78876 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:40.515509   78876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
I0829 18:24:40.516091   78876 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:40.516214   78876 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:40.516685   78876 cli_runner.go:164] Run: docker container inspect functional-108290 --format={{.State.Status}}
I0829 18:24:40.536442   78876 ssh_runner.go:195] Run: systemctl --version
I0829 18:24:40.536496   78876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108290
I0829 18:24:40.557610   78876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/functional-108290/id_rsa Username:docker}
I0829 18:24:40.649449   78876 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 ssh pgrep buildkitd: exit status 1 (227.489979ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image build -t localhost/my-image:functional-108290 testdata/build --alsologtostderr
2024/08/29 18:24:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 image build -t localhost/my-image:functional-108290 testdata/build --alsologtostderr: (2.860127103s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-108290 image build -t localhost/my-image:functional-108290 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9d63d525846
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-108290
--> 3136f22b05e
Successfully tagged localhost/my-image:functional-108290
3136f22b05ec0a49dd401289898a11100202bc6d62c3dac64466f978a6133424
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-108290 image build -t localhost/my-image:functional-108290 testdata/build --alsologtostderr:
I0829 18:24:40.967058   79084 out.go:345] Setting OutFile to fd 1 ...
I0829 18:24:40.967314   79084 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:40.967363   79084 out.go:358] Setting ErrFile to fd 2...
I0829 18:24:40.967375   79084 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:24:40.967714   79084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
I0829 18:24:40.968275   79084 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:40.968844   79084 config.go:182] Loaded profile config "functional-108290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:24:40.969224   79084 cli_runner.go:164] Run: docker container inspect functional-108290 --format={{.State.Status}}
I0829 18:24:40.987773   79084 ssh_runner.go:195] Run: systemctl --version
I0829 18:24:40.987825   79084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108290
I0829 18:24:41.005280   79084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/functional-108290/id_rsa Username:docker}
I0829 18:24:41.092904   79084 build_images.go:161] Building image from path: /tmp/build.151436506.tar
I0829 18:24:41.092970   79084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 18:24:41.101397   79084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.151436506.tar
I0829 18:24:41.104377   79084 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.151436506.tar: stat -c "%s %y" /var/lib/minikube/build/build.151436506.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.151436506.tar': No such file or directory
I0829 18:24:41.104399   79084 ssh_runner.go:362] scp /tmp/build.151436506.tar --> /var/lib/minikube/build/build.151436506.tar (3072 bytes)
I0829 18:24:41.127323   79084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.151436506
I0829 18:24:41.134967   79084 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.151436506 -xf /var/lib/minikube/build/build.151436506.tar
I0829 18:24:41.147312   79084 crio.go:315] Building image: /var/lib/minikube/build/build.151436506
I0829 18:24:41.147421   79084 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-108290 /var/lib/minikube/build/build.151436506 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0829 18:24:43.758326   79084 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-108290 /var/lib/minikube/build/build.151436506 --cgroup-manager=cgroupfs: (2.61087524s)
I0829 18:24:43.758405   79084 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.151436506
I0829 18:24:43.767663   79084 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.151436506.tar
I0829 18:24:43.776677   79084 build_images.go:217] Built localhost/my-image:functional-108290 from /tmp/build.151436506.tar
I0829 18:24:43.776712   79084 build_images.go:133] succeeded building to: functional-108290
I0829 18:24:43.776718   79084 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 image ls: (1.961254605s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.05s)

TestFunctional/parallel/ImageCommands/Setup (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-108290
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image load --daemon kicbase/echo-server:functional-108290 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 image load --daemon kicbase/echo-server:functional-108290 --alsologtostderr: (1.302674836s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

TestFunctional/parallel/MountCmd/any-port (17.65s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdany-port1098718019/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724955853477736274" to /tmp/TestFunctionalparallelMountCmdany-port1098718019/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724955853477736274" to /tmp/TestFunctionalparallelMountCmdany-port1098718019/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724955853477736274" to /tmp/TestFunctionalparallelMountCmdany-port1098718019/001/test-1724955853477736274
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (291.897146ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 test-1724955853477736274
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh cat /mount-9p/test-1724955853477736274
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-108290 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b8b1efb4-373d-4ff6-9a03-f3df532991e9] Pending
helpers_test.go:344: "busybox-mount" [b8b1efb4-373d-4ff6-9a03-f3df532991e9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b8b1efb4-373d-4ff6-9a03-f3df532991e9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b8b1efb4-373d-4ff6-9a03-f3df532991e9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.002974127s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-108290 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdany-port1098718019/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.65s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image load --daemon kicbase/echo-server:functional-108290 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-108290
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image load --daemon kicbase/echo-server:functional-108290 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-108290 image load --daemon kicbase/echo-server:functional-108290 --alsologtostderr: (1.14095977s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image save kicbase/echo-server:functional-108290 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image rm kicbase/echo-server:functional-108290 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-108290
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 image save --daemon kicbase/echo-server:functional-108290 --alsologtostderr
E0829 18:24:22.851456   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-108290
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-108290 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-108290 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-gvgz7" [d1112782-be87-40db-a422-8c87e22c432c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-gvgz7" [d1112782-be87-40db-a422-8c87e22c432c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003839333s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

TestFunctional/parallel/MountCmd/specific-port (1.48s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdspecific-port3003776376/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.935345ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdspecific-port3003776376/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 ssh "sudo umount -f /mount-9p": exit status 1 (250.74748ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-108290 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdspecific-port3003776376/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.48s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "302.30301ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "70.445237ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdVerifyCleanup95866189/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdVerifyCleanup95866189/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdVerifyCleanup95866189/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T" /mount1: exit status 1 (324.904679ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-108290 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdVerifyCleanup95866189/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdVerifyCleanup95866189/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-108290 /tmp/TestFunctionalparallelMountCmdVerifyCleanup95866189/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "338.497191ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.82742ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-108290 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-108290 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-108290 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-108290 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 76201: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-108290 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-108290 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [66685c51-50a7-4698-ad2a-f28f4a26d941] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [66685c51-50a7-4698-ad2a-f28f4a26d941] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.025768181s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

TestFunctional/parallel/ServiceCmd/List (0.94s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.94s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.97s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 service list -o json
functional_test.go:1494: Took "972.674873ms" to run "out/minikube-linux-amd64 -p functional-108290 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.97s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32259
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.65s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-108290 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32259
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-108290 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.66.12 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-108290 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-108290
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-108290
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-108290
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (104.6s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-730057 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0829 18:25:44.773728   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-730057 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m43.945014424s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (104.60s)

TestMultiControlPlane/serial/DeployApp (4.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-730057 -- rollout status deployment/busybox: (3.067169348s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-9wfz9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-h8sj4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-zh6pc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-9wfz9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-h8sj4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-zh6pc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-9wfz9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-h8sj4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-zh6pc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.83s)

TestMultiControlPlane/serial/PingHostFromPods (0.97s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-9wfz9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-9wfz9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-h8sj4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-h8sj4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-zh6pc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730057 -- exec busybox-7dff88458-zh6pc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.97s)

TestMultiControlPlane/serial/AddWorkerNode (59.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-730057 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-730057 -v=7 --alsologtostderr: (59.185729718s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.97s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-730057 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.6s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.60s)

TestMultiControlPlane/serial/CopyFile (14.93s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp testdata/cp-test.txt ha-730057:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2340305065/001/cp-test_ha-730057.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057:/home/docker/cp-test.txt ha-730057-m02:/home/docker/cp-test_ha-730057_ha-730057-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m02 "sudo cat /home/docker/cp-test_ha-730057_ha-730057-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057:/home/docker/cp-test.txt ha-730057-m03:/home/docker/cp-test_ha-730057_ha-730057-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m03 "sudo cat /home/docker/cp-test_ha-730057_ha-730057-m03.txt"
E0829 18:28:00.913510   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057:/home/docker/cp-test.txt ha-730057-m04:/home/docker/cp-test_ha-730057_ha-730057-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m04 "sudo cat /home/docker/cp-test_ha-730057_ha-730057-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp testdata/cp-test.txt ha-730057-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2340305065/001/cp-test_ha-730057-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m02:/home/docker/cp-test.txt ha-730057:/home/docker/cp-test_ha-730057-m02_ha-730057.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057 "sudo cat /home/docker/cp-test_ha-730057-m02_ha-730057.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m02:/home/docker/cp-test.txt ha-730057-m03:/home/docker/cp-test_ha-730057-m02_ha-730057-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m03 "sudo cat /home/docker/cp-test_ha-730057-m02_ha-730057-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m02:/home/docker/cp-test.txt ha-730057-m04:/home/docker/cp-test_ha-730057-m02_ha-730057-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m04 "sudo cat /home/docker/cp-test_ha-730057-m02_ha-730057-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp testdata/cp-test.txt ha-730057-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2340305065/001/cp-test_ha-730057-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m03:/home/docker/cp-test.txt ha-730057:/home/docker/cp-test_ha-730057-m03_ha-730057.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057 "sudo cat /home/docker/cp-test_ha-730057-m03_ha-730057.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m03:/home/docker/cp-test.txt ha-730057-m02:/home/docker/cp-test_ha-730057-m03_ha-730057-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m02 "sudo cat /home/docker/cp-test_ha-730057-m03_ha-730057-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m03:/home/docker/cp-test.txt ha-730057-m04:/home/docker/cp-test_ha-730057-m03_ha-730057-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m04 "sudo cat /home/docker/cp-test_ha-730057-m03_ha-730057-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp testdata/cp-test.txt ha-730057-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2340305065/001/cp-test_ha-730057-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m04:/home/docker/cp-test.txt ha-730057:/home/docker/cp-test_ha-730057-m04_ha-730057.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057 "sudo cat /home/docker/cp-test_ha-730057-m04_ha-730057.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m04:/home/docker/cp-test.txt ha-730057-m02:/home/docker/cp-test_ha-730057-m04_ha-730057-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m02 "sudo cat /home/docker/cp-test_ha-730057-m04_ha-730057-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 cp ha-730057-m04:/home/docker/cp-test.txt ha-730057-m03:/home/docker/cp-test_ha-730057-m04_ha-730057-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 ssh -n ha-730057-m03 "sudo cat /home/docker/cp-test_ha-730057-m04_ha-730057-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.93s)
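The CopyFile pass above exercises a full copy matrix: the fixture testdata/cp-test.txt is pushed to every node, pulled back into a local temp directory, and then copied from each node to every other node under a name that records both endpoints (cp-test_&lt;src&gt;_&lt;dst&gt;.txt). A minimal sketch of that plan, assuming the node names from the log; `build_copy_plan` is a hypothetical helper, not part of minikube:

```python
# Sketch of the copy matrix exercised by TestMultiControlPlane/serial/CopyFile.
# Each (src, dst) pair corresponds to one `minikube cp` call in the log above;
# build_copy_plan is a hypothetical helper, not part of minikube.

def build_copy_plan(nodes):
    """Return (src_spec, dst_spec) pairs mirroring the test's cp calls."""
    plan = []
    for src in nodes:
        # push the fixture to the node, then pull it back to a local path
        plan.append(("testdata/cp-test.txt", f"{src}:/home/docker/cp-test.txt"))
        plan.append((f"{src}:/home/docker/cp-test.txt", f"/tmp/cp-test_{src}.txt"))
        # fan out from this node to every other node
        for dst in nodes:
            if dst != src:
                plan.append((f"{src}:/home/docker/cp-test.txt",
                             f"{dst}:/home/docker/cp-test_{src}_{dst}.txt"))
    return plan

nodes = ["ha-730057", "ha-730057-m02", "ha-730057-m03", "ha-730057-m04"]
plan = build_copy_plan(nodes)
print(len(plan))  # 4 nodes * (2 + 3 cross-node copies) = 20 pairs
```

Each pair maps onto one `out/minikube-linux-amd64 -p ha-730057 cp SRC DST` invocation, followed in the log by a `ssh -n ... "sudo cat ..."` verification of the destination file.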

TestMultiControlPlane/serial/StopSecondaryNode (12.47s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-730057 node stop m02 -v=7 --alsologtostderr: (11.844018088s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr: exit status 7 (624.803165ms)

-- stdout --
	ha-730057
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-730057-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-730057-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-730057-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 18:28:24.430742  100652 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:28:24.430852  100652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:28:24.430861  100652 out.go:358] Setting ErrFile to fd 2...
	I0829 18:28:24.430866  100652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:28:24.431055  100652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:28:24.431224  100652 out.go:352] Setting JSON to false
	I0829 18:28:24.431251  100652 mustload.go:65] Loading cluster: ha-730057
	I0829 18:28:24.431376  100652 notify.go:220] Checking for updates...
	I0829 18:28:24.431751  100652 config.go:182] Loaded profile config "ha-730057": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:28:24.431772  100652 status.go:255] checking status of ha-730057 ...
	I0829 18:28:24.432202  100652 cli_runner.go:164] Run: docker container inspect ha-730057 --format={{.State.Status}}
	I0829 18:28:24.450130  100652 status.go:330] ha-730057 host status = "Running" (err=<nil>)
	I0829 18:28:24.450175  100652 host.go:66] Checking if "ha-730057" exists ...
	I0829 18:28:24.450452  100652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-730057
	I0829 18:28:24.467366  100652 host.go:66] Checking if "ha-730057" exists ...
	I0829 18:28:24.467655  100652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:28:24.467697  100652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-730057
	I0829 18:28:24.484688  100652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/ha-730057/id_rsa Username:docker}
	I0829 18:28:24.573662  100652 ssh_runner.go:195] Run: systemctl --version
	I0829 18:28:24.577519  100652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:24.587751  100652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:28:24.633505  100652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-29 18:28:24.623982381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:28:24.634198  100652 kubeconfig.go:125] found "ha-730057" server: "https://192.168.49.254:8443"
	I0829 18:28:24.634241  100652 api_server.go:166] Checking apiserver status ...
	I0829 18:28:24.634289  100652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:28:24.645232  100652 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1489/cgroup
	I0829 18:28:24.654293  100652 api_server.go:182] apiserver freezer: "2:freezer:/docker/49612578875b36477004dd8fe094e41fc54f99d4acf150d40c25e53ca95b2121/crio/crio-064b84e487af7019186524ed82e32f12cb7a7d636f1f6c85c7b1d15670174b73"
	I0829 18:28:24.654381  100652 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/49612578875b36477004dd8fe094e41fc54f99d4acf150d40c25e53ca95b2121/crio/crio-064b84e487af7019186524ed82e32f12cb7a7d636f1f6c85c7b1d15670174b73/freezer.state
	I0829 18:28:24.662275  100652 api_server.go:204] freezer state: "THAWED"
	I0829 18:28:24.662304  100652 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0829 18:28:24.665826  100652 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0829 18:28:24.665845  100652 status.go:422] ha-730057 apiserver status = Running (err=<nil>)
	I0829 18:28:24.665854  100652 status.go:257] ha-730057 status: &{Name:ha-730057 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:28:24.665875  100652 status.go:255] checking status of ha-730057-m02 ...
	I0829 18:28:24.666089  100652 cli_runner.go:164] Run: docker container inspect ha-730057-m02 --format={{.State.Status}}
	I0829 18:28:24.683963  100652 status.go:330] ha-730057-m02 host status = "Stopped" (err=<nil>)
	I0829 18:28:24.683983  100652 status.go:343] host is not running, skipping remaining checks
	I0829 18:28:24.683989  100652 status.go:257] ha-730057-m02 status: &{Name:ha-730057-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:28:24.684012  100652 status.go:255] checking status of ha-730057-m03 ...
	I0829 18:28:24.684298  100652 cli_runner.go:164] Run: docker container inspect ha-730057-m03 --format={{.State.Status}}
	I0829 18:28:24.701371  100652 status.go:330] ha-730057-m03 host status = "Running" (err=<nil>)
	I0829 18:28:24.701395  100652 host.go:66] Checking if "ha-730057-m03" exists ...
	I0829 18:28:24.701670  100652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-730057-m03
	I0829 18:28:24.717980  100652 host.go:66] Checking if "ha-730057-m03" exists ...
	I0829 18:28:24.718217  100652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:28:24.718252  100652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-730057-m03
	I0829 18:28:24.734964  100652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/ha-730057-m03/id_rsa Username:docker}
	I0829 18:28:24.821617  100652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:24.831693  100652 kubeconfig.go:125] found "ha-730057" server: "https://192.168.49.254:8443"
	I0829 18:28:24.831715  100652 api_server.go:166] Checking apiserver status ...
	I0829 18:28:24.831739  100652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:28:24.840945  100652 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	I0829 18:28:24.848760  100652 api_server.go:182] apiserver freezer: "2:freezer:/docker/366eb7b983f24efa12b5f33479d5f46112d5322c6d756a703f4f05c3ed375be3/crio/crio-51ba3c685ce5603e724fa276cf283af664a66fe97badac51848e0c0a1d97c690"
	I0829 18:28:24.848829  100652 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/366eb7b983f24efa12b5f33479d5f46112d5322c6d756a703f4f05c3ed375be3/crio/crio-51ba3c685ce5603e724fa276cf283af664a66fe97badac51848e0c0a1d97c690/freezer.state
	I0829 18:28:24.856004  100652 api_server.go:204] freezer state: "THAWED"
	I0829 18:28:24.856025  100652 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0829 18:28:24.859508  100652 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0829 18:28:24.859534  100652 status.go:422] ha-730057-m03 apiserver status = Running (err=<nil>)
	I0829 18:28:24.859542  100652 status.go:257] ha-730057-m03 status: &{Name:ha-730057-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:28:24.859558  100652 status.go:255] checking status of ha-730057-m04 ...
	I0829 18:28:24.859783  100652 cli_runner.go:164] Run: docker container inspect ha-730057-m04 --format={{.State.Status}}
	I0829 18:28:24.876652  100652 status.go:330] ha-730057-m04 host status = "Running" (err=<nil>)
	I0829 18:28:24.876671  100652 host.go:66] Checking if "ha-730057-m04" exists ...
	I0829 18:28:24.876980  100652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-730057-m04
	I0829 18:28:24.893582  100652 host.go:66] Checking if "ha-730057-m04" exists ...
	I0829 18:28:24.893817  100652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:28:24.893848  100652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-730057-m04
	I0829 18:28:24.910155  100652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/ha-730057-m04/id_rsa Username:docker}
	I0829 18:28:25.001301  100652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:25.012270  100652 status.go:257] ha-730057-m04 status: &{Name:ha-730057-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.47s)
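`minikube status` reports each node as a plain-text block of `key: value` lines, and the run above exits with status 7 because the m02 host is stopped. A hedged sketch of parsing that block format, assuming only the layout shown in the stdout above; `parse_status` is a hypothetical helper, not part of minikube:

```python
# Minimal parser for the plain-text `minikube status` output shown above.
# The field names (type, host, kubelet, apiserver, kubeconfig) come straight
# from the log; parse_status is a hypothetical helper.

def parse_status(text):
    """Map each node name to a dict of its reported fields."""
    nodes, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if ":" in line:
            key, value = line.split(":", 1)
            nodes[current][key.strip()] = value.strip()
        else:
            current = line          # a bare line starts a new node block
            nodes[current] = {}
    return nodes

sample = """\
ha-730057
type: Control Plane
host: Running
kubelet: Running

ha-730057-m02
type: Control Plane
host: Stopped
kubelet: Stopped
"""
status = parse_status(sample)
print(status["ha-730057-m02"]["host"])  # Stopped
```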

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.57s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 node start m02 -v=7 --alsologtostderr
E0829 18:28:28.616903   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-730057 node start m02 -v=7 --alsologtostderr: (19.699208166s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.18s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.178117208s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.18s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (169.87s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-730057 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-730057 -v=7 --alsologtostderr
E0829 18:29:12.699533   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:12.705911   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:12.717269   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:12.738637   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:12.780032   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:12.861501   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:13.022999   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:13.344488   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:13.985958   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:15.267251   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:17.829191   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:22.950998   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:33.193299   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-730057 -v=7 --alsologtostderr: (36.523936838s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-730057 --wait=true -v=7 --alsologtostderr
E0829 18:29:53.674670   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:30:34.636276   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-730057 --wait=true -v=7 --alsologtostderr: (2m13.259739843s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-730057
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (169.87s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.27s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 node delete m03 -v=7 --alsologtostderr
E0829 18:31:56.557815   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-730057 node delete m03 -v=7 --alsologtostderr: (10.548555044s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.27s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

TestMultiControlPlane/serial/StopCluster (35.46s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-730057 stop -v=7 --alsologtostderr: (35.359590293s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr: exit status 7 (98.13946ms)

-- stdout --
	ha-730057
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-730057-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-730057-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0829 18:32:39.210924  118435 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:32:39.211022  118435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:39.211026  118435 out.go:358] Setting ErrFile to fd 2...
	I0829 18:32:39.211031  118435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:39.211184  118435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:32:39.211495  118435 out.go:352] Setting JSON to false
	I0829 18:32:39.211559  118435 mustload.go:65] Loading cluster: ha-730057
	I0829 18:32:39.211632  118435 notify.go:220] Checking for updates...
	I0829 18:32:39.212896  118435 config.go:182] Loaded profile config "ha-730057": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:32:39.212931  118435 status.go:255] checking status of ha-730057 ...
	I0829 18:32:39.213427  118435 cli_runner.go:164] Run: docker container inspect ha-730057 --format={{.State.Status}}
	I0829 18:32:39.233595  118435 status.go:330] ha-730057 host status = "Stopped" (err=<nil>)
	I0829 18:32:39.233620  118435 status.go:343] host is not running, skipping remaining checks
	I0829 18:32:39.233629  118435 status.go:257] ha-730057 status: &{Name:ha-730057 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:39.233656  118435 status.go:255] checking status of ha-730057-m02 ...
	I0829 18:32:39.233996  118435 cli_runner.go:164] Run: docker container inspect ha-730057-m02 --format={{.State.Status}}
	I0829 18:32:39.250874  118435 status.go:330] ha-730057-m02 host status = "Stopped" (err=<nil>)
	I0829 18:32:39.250895  118435 status.go:343] host is not running, skipping remaining checks
	I0829 18:32:39.250902  118435 status.go:257] ha-730057-m02 status: &{Name:ha-730057-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:39.250923  118435 status.go:255] checking status of ha-730057-m04 ...
	I0829 18:32:39.251195  118435 cli_runner.go:164] Run: docker container inspect ha-730057-m04 --format={{.State.Status}}
	I0829 18:32:39.267535  118435 status.go:330] ha-730057-m04 host status = "Stopped" (err=<nil>)
	I0829 18:32:39.267554  118435 status.go:343] host is not running, skipping remaining checks
	I0829 18:32:39.267560  118435 status.go:257] ha-730057-m04 status: &{Name:ha-730057-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.46s)

TestMultiControlPlane/serial/RestartCluster (59.77s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-730057 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0829 18:33:00.912835   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-730057 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (59.038611795s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.77s)
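The go-template passed to `kubectl get nodes` at ha_test.go:592 walks every node's status.conditions and prints .status for the condition whose .type is "Ready". The same walk over the JSON form of the node list might look like this sketch; the sample data is illustrative, not taken from this cluster:

```python
# Python equivalent of the go-template in ha_test.go:592:
# for each node, emit the status of its "Ready" condition.
# The sample node list below is illustrative, not real cluster output.

def ready_statuses(node_list):
    """Collect the 'Ready' condition status string for every node."""
    out = []
    for item in node_list["items"]:
        for cond in item["status"]["conditions"]:
            if cond["type"] == "Ready":
                out.append(cond["status"])
    return out

sample = {"items": [
    {"status": {"conditions": [{"type": "MemoryPressure", "status": "False"},
                               {"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]}
print(ready_statuses(sample))  # ['True', 'True']
```

The test passes when every emitted status is "True", i.e. all remaining nodes report Ready after the cluster restart.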

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

TestMultiControlPlane/serial/AddSecondaryNode (40.41s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-730057 --control-plane -v=7 --alsologtostderr
E0829 18:34:12.699723   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-730057 --control-plane -v=7 --alsologtostderr: (39.622871569s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-730057 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

TestJSONOutput/start/Command (41.37s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-175083 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0829 18:34:40.399491   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-175083 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (41.366278366s)
--- PASS: TestJSONOutput/start/Command (41.37s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-175083 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-175083 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-175083 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-175083 --output=json --user=testUser: (5.726749275s)
--- PASS: TestJSONOutput/stop/Command (5.73s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-939919 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-939919 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.097821ms)
-- stdout --
	{"specversion":"1.0","id":"3f75be1e-2c1e-4cfc-9c98-69b067cecdf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-939919] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2f09130-0fb2-4ca4-806a-e25bc7d86967","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"90df4b76-dbed-4335-a2fe-d6fd85d9bcb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4d07cb09-d071-44e4-a822-339fa6460131","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig"}}
	{"specversion":"1.0","id":"10f8c567-3140-4eb8-98cc-0db506a731c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube"}}
	{"specversion":"1.0","id":"497925c3-59e1-4d23-a9f1-e8861bc7476e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a385e335-d937-4817-9dd6-474caecf5c76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8dadb06a-b374-4817-bb64-b481afb82c45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-939919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-939919
--- PASS: TestErrorJSONOutput (0.19s)
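
Each line minikube prints under `--output=json` is a CloudEvents-style envelope, and the last event in the stdout block above carries the exit code and the `DRV_UNSUPPORTED_OS` error. A minimal sketch of pulling the error payload out of such a stream — the event line is copied from the log above, but the `first_error` helper is ours, not part of minikube:

```python
import json

# One line of `minikube start --output=json` output, taken verbatim from the log above.
RAW = (
    '{"specversion":"1.0","id":"8dadb06a-b374-4817-bb64-b481afb82c45",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
    '"issues":"","message":"The driver \'fail\' is not supported on linux/amd64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}'
)

def first_error(lines):
    """Return the data payload of the first *.error event, or None if absent."""
    for line in lines:
        event = json.loads(line)
        if event.get("type", "").endswith(".error"):
            return event["data"]
    return None

err = first_error([RAW])
print(err["exitcode"], err["name"])  # → 56 DRV_UNSUPPORTED_OS
```

The test's `Non-zero exit ... exit status 56` line matches the `exitcode` field inside this event, which is how the JSON-output tests can assert on structured errors instead of scraping text.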
TestKicCustomNetwork/create_custom_network (25.93s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-874128 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-874128 --network=: (23.915588843s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-874128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-874128
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-874128: (1.999609361s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.93s)
TestKicCustomNetwork/use_default_bridge_network (22.42s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-211984 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-211984 --network=bridge: (20.551859884s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-211984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-211984
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-211984: (1.850786296s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.42s)
TestKicExistingNetwork (21.72s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-301709 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-301709 --network=existing-network: (19.758395396s)
helpers_test.go:175: Cleaning up "existing-network-301709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-301709
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-301709: (1.828165238s)
--- PASS: TestKicExistingNetwork (21.72s)
TestKicCustomSubnet (26.74s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-010448 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-010448 --subnet=192.168.60.0/24: (24.775040293s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-010448 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-010448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-010448
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-010448: (1.950517431s)
--- PASS: TestKicCustomSubnet (26.74s)
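
The test above requests `--subnet=192.168.60.0/24` and then reads the subnet back via `docker network inspect ... '{{(index .IPAM.Config 0).Subnet}}'`. The containment check that comparison implies can be sketched with the stdlib `ipaddress` module; the gateway address below is an illustrative assumption, not a value from the log:

```python
import ipaddress

# Subnet requested via --subnet, and the value docker network inspect would
# report back for the created network (identical when the test passes).
requested = ipaddress.ip_network("192.168.60.0/24")
reported = ipaddress.ip_network("192.168.60.0/24")

# Assumed gateway inside that range, purely for illustration.
gateway = ipaddress.ip_address("192.168.60.1")

print(reported == requested, gateway in requested)  # → True True
```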
TestKicStaticIP (23.04s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-352809 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-352809 --static-ip=192.168.200.200: (20.865812664s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-352809 ip
helpers_test.go:175: Cleaning up "static-ip-352809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-352809
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-352809: (2.056128977s)
--- PASS: TestKicStaticIP (23.04s)
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)
TestMinikubeProfile (48.32s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-785817 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-785817 --driver=docker  --container-runtime=crio: (22.881078253s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-788481 --driver=docker  --container-runtime=crio
E0829 18:38:00.912792   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-788481 --driver=docker  --container-runtime=crio: (20.471748057s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-785817
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-788481
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-788481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-788481
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-788481: (1.784733802s)
helpers_test.go:175: Cleaning up "first-785817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-785817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-785817: (2.163422493s)
--- PASS: TestMinikubeProfile (48.32s)
TestMountStart/serial/StartWithMountFirst (5.58s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-576931 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-576931 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.58045019s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.58s)
TestMountStart/serial/VerifyMountFirst (0.23s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-576931 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)
TestMountStart/serial/StartWithMountSecond (5.59s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-596337 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-596337 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.590032903s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.59s)
TestMountStart/serial/VerifyMountSecond (0.23s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-596337 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)
TestMountStart/serial/DeleteFirst (1.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-576931 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-576931 --alsologtostderr -v=5: (1.576779553s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)
TestMountStart/serial/VerifyMountPostDelete (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-596337 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)
TestMountStart/serial/Stop (1.16s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-596337
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-596337: (1.163815694s)
--- PASS: TestMountStart/serial/Stop (1.16s)
TestMountStart/serial/RestartStopped (7.23s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-596337
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-596337: (6.230965949s)
--- PASS: TestMountStart/serial/RestartStopped (7.23s)
TestMountStart/serial/VerifyMountPostStop (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-596337 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)
TestMultiNode/serial/FreshStart2Nodes (67.28s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549766 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0829 18:39:12.699060   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:39:23.978459   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-549766 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m6.853007491s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.28s)
TestMultiNode/serial/DeployApp2Nodes (3.81s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-549766 -- rollout status deployment/busybox: (2.519876763s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-2h448 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-hdn7x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-2h448 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-hdn7x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-2h448 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-hdn7x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.81s)
TestMultiNode/serial/PingHostFrom2Pods (0.66s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-2h448 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-2h448 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-hdn7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549766 -- exec busybox-7dff88458-hdn7x -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.66s)
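
The test above extracts the host IP from busybox `nslookup` output with `awk 'NR==5' | cut -d' ' -f3`, i.e. the fifth line, third space-separated field. A sketch of the same extraction on canned output — the `SAMPLE` text below is an illustrative approximation of busybox nslookup output, not copied from the log:

```python
# Canned busybox-style `nslookup host.minikube.internal` output. The test's
# pipeline keeps line 5 (awk 'NR==5') and field 3 (cut -d' ' -f3).
SAMPLE = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1
"""

def host_ip(nslookup_output: str) -> str:
    """Mimic awk 'NR==5' | cut -d' ' -f3 (both 1-based).

    Note split(" ") rather than split(): like cut, consecutive spaces
    delimit empty fields instead of being collapsed.
    """
    line5 = nslookup_output.splitlines()[4]  # awk NR==5
    return line5.split(" ")[2]               # cut -d' ' -f3

print(host_ip(SAMPLE))  # → 192.168.67.1
```

The extracted address is then fed to `ping -c 1`, which is exactly the `192.168.67.1` seen in the `ping` commands above.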
TestMultiNode/serial/AddNode (26.51s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-549766 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-549766 -v 3 --alsologtostderr: (25.9316419s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.51s)
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-549766 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
TestMultiNode/serial/ProfileList (0.27s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)
TestMultiNode/serial/CopyFile (8.62s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp testdata/cp-test.txt multinode-549766:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3058414242/001/cp-test_multinode-549766.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766:/home/docker/cp-test.txt multinode-549766-m02:/home/docker/cp-test_multinode-549766_multinode-549766-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m02 "sudo cat /home/docker/cp-test_multinode-549766_multinode-549766-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766:/home/docker/cp-test.txt multinode-549766-m03:/home/docker/cp-test_multinode-549766_multinode-549766-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m03 "sudo cat /home/docker/cp-test_multinode-549766_multinode-549766-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp testdata/cp-test.txt multinode-549766-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3058414242/001/cp-test_multinode-549766-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766-m02:/home/docker/cp-test.txt multinode-549766:/home/docker/cp-test_multinode-549766-m02_multinode-549766.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766 "sudo cat /home/docker/cp-test_multinode-549766-m02_multinode-549766.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766-m02:/home/docker/cp-test.txt multinode-549766-m03:/home/docker/cp-test_multinode-549766-m02_multinode-549766-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m03 "sudo cat /home/docker/cp-test_multinode-549766-m02_multinode-549766-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp testdata/cp-test.txt multinode-549766-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3058414242/001/cp-test_multinode-549766-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766-m03:/home/docker/cp-test.txt multinode-549766:/home/docker/cp-test_multinode-549766-m03_multinode-549766.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766 "sudo cat /home/docker/cp-test_multinode-549766-m03_multinode-549766.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 cp multinode-549766-m03:/home/docker/cp-test.txt multinode-549766-m02:/home/docker/cp-test_multinode-549766-m03_multinode-549766-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 ssh -n multinode-549766-m02 "sudo cat /home/docker/cp-test_multinode-549766-m03_multinode-549766-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.62s)

TestMultiNode/serial/StopNode (2.05s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-549766 node stop m03: (1.167290127s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-549766 status: exit status 7 (444.05215ms)

-- stdout --
	multinode-549766
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-549766-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-549766-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-549766 status --alsologtostderr: exit status 7 (435.719715ms)

-- stdout --
	multinode-549766
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-549766-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-549766-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0829 18:40:21.450399  183387 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:40:21.450631  183387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:40:21.450638  183387 out.go:358] Setting ErrFile to fd 2...
	I0829 18:40:21.450642  183387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:40:21.450846  183387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:40:21.451002  183387 out.go:352] Setting JSON to false
	I0829 18:40:21.451026  183387 mustload.go:65] Loading cluster: multinode-549766
	I0829 18:40:21.451072  183387 notify.go:220] Checking for updates...
	I0829 18:40:21.451357  183387 config.go:182] Loaded profile config "multinode-549766": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:40:21.451370  183387 status.go:255] checking status of multinode-549766 ...
	I0829 18:40:21.451732  183387 cli_runner.go:164] Run: docker container inspect multinode-549766 --format={{.State.Status}}
	I0829 18:40:21.468443  183387 status.go:330] multinode-549766 host status = "Running" (err=<nil>)
	I0829 18:40:21.468483  183387 host.go:66] Checking if "multinode-549766" exists ...
	I0829 18:40:21.468819  183387 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549766
	I0829 18:40:21.485418  183387 host.go:66] Checking if "multinode-549766" exists ...
	I0829 18:40:21.485686  183387 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:40:21.485718  183387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549766
	I0829 18:40:21.502835  183387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/multinode-549766/id_rsa Username:docker}
	I0829 18:40:21.589958  183387 ssh_runner.go:195] Run: systemctl --version
	I0829 18:40:21.593937  183387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:40:21.603884  183387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:40:21.652094  183387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-29 18:40:21.642822038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:40:21.652662  183387 kubeconfig.go:125] found "multinode-549766" server: "https://192.168.67.2:8443"
	I0829 18:40:21.652692  183387 api_server.go:166] Checking apiserver status ...
	I0829 18:40:21.652786  183387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:40:21.662643  183387 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1453/cgroup
	I0829 18:40:21.670786  183387 api_server.go:182] apiserver freezer: "2:freezer:/docker/49e0c8da1597459ea3b3e9f4929deb669cc2f8369bcf0d6e91767872fcd00842/crio/crio-b20762d43fd391fc4db0b970f1472a338cc8fad94f56c7a77d4d392b62e05088"
	I0829 18:40:21.670837  183387 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/49e0c8da1597459ea3b3e9f4929deb669cc2f8369bcf0d6e91767872fcd00842/crio/crio-b20762d43fd391fc4db0b970f1472a338cc8fad94f56c7a77d4d392b62e05088/freezer.state
	I0829 18:40:21.678220  183387 api_server.go:204] freezer state: "THAWED"
	I0829 18:40:21.678256  183387 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0829 18:40:21.681804  183387 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0829 18:40:21.681823  183387 status.go:422] multinode-549766 apiserver status = Running (err=<nil>)
	I0829 18:40:21.681833  183387 status.go:257] multinode-549766 status: &{Name:multinode-549766 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:40:21.681849  183387 status.go:255] checking status of multinode-549766-m02 ...
	I0829 18:40:21.682077  183387 cli_runner.go:164] Run: docker container inspect multinode-549766-m02 --format={{.State.Status}}
	I0829 18:40:21.698989  183387 status.go:330] multinode-549766-m02 host status = "Running" (err=<nil>)
	I0829 18:40:21.699009  183387 host.go:66] Checking if "multinode-549766-m02" exists ...
	I0829 18:40:21.699223  183387 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549766-m02
	I0829 18:40:21.715658  183387 host.go:66] Checking if "multinode-549766-m02" exists ...
	I0829 18:40:21.715917  183387 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:40:21.715948  183387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549766-m02
	I0829 18:40:21.732327  183387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19531-25336/.minikube/machines/multinode-549766-m02/id_rsa Username:docker}
	I0829 18:40:21.817336  183387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:40:21.827190  183387 status.go:257] multinode-549766-m02 status: &{Name:multinode-549766-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:40:21.827228  183387 status.go:255] checking status of multinode-549766-m03 ...
	I0829 18:40:21.827474  183387 cli_runner.go:164] Run: docker container inspect multinode-549766-m03 --format={{.State.Status}}
	I0829 18:40:21.844087  183387 status.go:330] multinode-549766-m03 host status = "Stopped" (err=<nil>)
	I0829 18:40:21.844107  183387 status.go:343] host is not running, skipping remaining checks
	I0829 18:40:21.844113  183387 status.go:257] multinode-549766-m03 status: &{Name:multinode-549766-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.05s)

TestMultiNode/serial/StartAfterStop (8.88s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-549766 node start m03 -v=7 --alsologtostderr: (8.250604077s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.88s)

TestMultiNode/serial/RestartKeepsNodes (99.96s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-549766
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-549766
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-549766: (24.607758821s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549766 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-549766 --wait=true -v=8 --alsologtostderr: (1m15.265736647s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-549766
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.96s)

TestMultiNode/serial/DeleteNode (5.19s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-549766 node delete m03: (4.654554899s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

TestMultiNode/serial/StopMultiNode (23.67s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-549766 stop: (23.51950046s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-549766 status: exit status 7 (75.070453ms)

-- stdout --
	multinode-549766
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-549766-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-549766 status --alsologtostderr: exit status 7 (76.729957ms)

-- stdout --
	multinode-549766
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-549766-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0829 18:42:39.512268  193128 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:42:39.512384  193128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:42:39.512395  193128 out.go:358] Setting ErrFile to fd 2...
	I0829 18:42:39.512400  193128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:42:39.512600  193128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:42:39.512847  193128 out.go:352] Setting JSON to false
	I0829 18:42:39.512882  193128 mustload.go:65] Loading cluster: multinode-549766
	I0829 18:42:39.512985  193128 notify.go:220] Checking for updates...
	I0829 18:42:39.513282  193128 config.go:182] Loaded profile config "multinode-549766": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:42:39.513300  193128 status.go:255] checking status of multinode-549766 ...
	I0829 18:42:39.513685  193128 cli_runner.go:164] Run: docker container inspect multinode-549766 --format={{.State.Status}}
	I0829 18:42:39.531152  193128 status.go:330] multinode-549766 host status = "Stopped" (err=<nil>)
	I0829 18:42:39.531172  193128 status.go:343] host is not running, skipping remaining checks
	I0829 18:42:39.531179  193128 status.go:257] multinode-549766 status: &{Name:multinode-549766 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:42:39.531218  193128 status.go:255] checking status of multinode-549766-m02 ...
	I0829 18:42:39.531445  193128 cli_runner.go:164] Run: docker container inspect multinode-549766-m02 --format={{.State.Status}}
	I0829 18:42:39.547152  193128 status.go:330] multinode-549766-m02 host status = "Stopped" (err=<nil>)
	I0829 18:42:39.547176  193128 status.go:343] host is not running, skipping remaining checks
	I0829 18:42:39.547184  193128 status.go:257] multinode-549766-m02 status: &{Name:multinode-549766-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.67s)

TestMultiNode/serial/RestartMultiNode (47.17s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549766 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0829 18:43:00.912608   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-549766 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (46.630155106s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549766 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.17s)

TestMultiNode/serial/ValidateNameConflict (21.01s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-549766
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549766-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-549766-m02 --driver=docker  --container-runtime=crio: exit status 14 (60.329528ms)

-- stdout --
	* [multinode-549766-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-549766-m02' is duplicated with machine name 'multinode-549766-m02' in profile 'multinode-549766'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549766-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-549766-m03 --driver=docker  --container-runtime=crio: (18.832639885s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-549766
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-549766: exit status 80 (253.551778ms)

-- stdout --
	* Adding node m03 to cluster multinode-549766 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-549766-m03 already exists in multinode-549766-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-549766-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-549766-m03: (1.825418627s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.01s)

TestPreload (112.91s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-869972 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0829 18:44:12.699606   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-869972 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m18.897278903s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-869972 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-869972 image pull gcr.io/k8s-minikube/busybox: (2.01156543s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-869972
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-869972: (5.694428296s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-869972 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0829 18:45:35.761132   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-869972 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (23.863699071s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-869972 image list
helpers_test.go:175: Cleaning up "test-preload-869972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-869972
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-869972: (2.221704721s)
--- PASS: TestPreload (112.91s)

TestScheduledStopUnix (97.33s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-206145 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-206145 --memory=2048 --driver=docker  --container-runtime=crio: (21.230865013s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-206145 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-206145 -n scheduled-stop-206145
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-206145 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-206145 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-206145 -n scheduled-stop-206145
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-206145
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-206145 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-206145
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-206145: exit status 7 (61.344999ms)

-- stdout --
	scheduled-stop-206145
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-206145 -n scheduled-stop-206145
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-206145 -n scheduled-stop-206145: exit status 7 (60.247139ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-206145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-206145
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-206145: (4.886741493s)
--- PASS: TestScheduledStopUnix (97.33s)

TestInsufficientStorage (9.47s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-998208 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-998208 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.187446786s)

-- stdout --
	{"specversion":"1.0","id":"e9456fcb-c31e-4f01-a141-3f7915ddff37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-998208] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c50fd41b-8362-42ff-8514-57df7bd74f83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"acf2a799-dfce-4a7a-810d-107a83babf26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"24448a19-5415-444d-9af4-afc1c230d8bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig"}}
	{"specversion":"1.0","id":"039b3812-acbf-48f5-a064-e7f2ccf2cd29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube"}}
	{"specversion":"1.0","id":"2bfab5d7-e3e3-467a-8e9f-c9c81a6799c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6f02521a-3108-4bf1-b57b-3260cdb041c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c4770990-791a-40b9-bc0a-29e73ace5493","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"407027ab-6db2-4113-b5a5-83ae0a1fa549","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ff40feb6-ddb7-4390-b840-3ec6e029888f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec7a4d07-ceef-4875-8a5f-ecc6b31cd077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dbfc651f-ff04-4332-85f7-f5dd09d83d70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-998208\" primary control-plane node in \"insufficient-storage-998208\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3e54592-d12e-44be-ae9b-00b6a232d2b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724775115-19521 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d6fa4e5-b95a-415c-aa12-c12d83ee40d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"453437ea-a7bd-41a6-a46f-4fc2adc0bdb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-998208 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-998208 --output=json --layout=cluster: exit status 7 (245.863982ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-998208","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-998208","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 18:47:29.284443  215576 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-998208" does not appear in /home/jenkins/minikube-integration/19531-25336/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-998208 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-998208 --output=json --layout=cluster: exit status 7 (252.872624ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-998208","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-998208","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 18:47:29.537685  215674 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-998208" does not appear in /home/jenkins/minikube-integration/19531-25336/kubeconfig
	E0829 18:47:29.547281  215674 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/insufficient-storage-998208/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-998208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-998208
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-998208: (1.783035948s)
--- PASS: TestInsufficientStorage (9.47s)
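Note: each line of the `--output=json` stream in this test is a CloudEvents envelope, and the RSRC_DOCKER_STORAGE failure above arrives as a `type":"io.k8s.sigs.minikube.error"` event carrying `exitcode` 26. A minimal sketch of consuming that stream (the `first_error` helper is hypothetical, not part of minikube; the sample events are abridged copies of lines from this run):

```python
import json

# Abridged event lines copied from the minikube --output=json stream above.
events = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","name":"Creating Container"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space! (/var is at 100% of capacity)."}}',
]

def first_error(lines):
    """Return (name, exitcode, message) of the first error event, or None."""
    for line in lines:
        evt = json.loads(line)
        if evt.get("type") == "io.k8s.sigs.minikube.error":
            d = evt["data"]
            return d["name"], int(d["exitcode"]), d["message"]
    return None

print(first_error(events))
```

The `exitcode` field inside the error event matches the process exit status (26) that the test asserts on.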

                                                
                                    
TestRunningBinaryUpgrade (116.18s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.584549892 start -p running-upgrade-737027 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.584549892 start -p running-upgrade-737027 --memory=2200 --vm-driver=docker  --container-runtime=crio: (43.378952956s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-737027 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-737027 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.751781519s)
helpers_test.go:175: Cleaning up "running-upgrade-737027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-737027
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-737027: (2.617154393s)
--- PASS: TestRunningBinaryUpgrade (116.18s)

                                                
                                    
TestKubernetesUpgrade (334.85s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-400035 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-400035 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.367852285s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-400035
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-400035: (1.224020305s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-400035 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-400035 status --format={{.Host}}: exit status 7 (61.261196ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-400035 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-400035 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.263110253s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-400035 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-400035 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-400035 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (76.967797ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-400035] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-400035
	    minikube start -p kubernetes-upgrade-400035 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4000352 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-400035 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-400035 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-400035 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (10.668628918s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-400035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-400035
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-400035: (2.109031726s)
--- PASS: TestKubernetesUpgrade (334.85s)
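Note: the K8S_DOWNGRADE_UNSUPPORTED refusal above (exit status 106) follows from comparing the requested `--kubernetes-version` against the version already provisioned in the cluster. A minimal sketch of that comparison, under the assumption of plain semver-style `vMAJOR.MINOR.PATCH` strings (this is not minikube's actual implementation):

```python
def parse_version(v: str) -> tuple:
    """Parse 'v1.31.0' into the comparable tuple (1, 31, 0)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def downgrade_requested(existing: str, requested: str) -> bool:
    """True when the requested Kubernetes version is older than the cluster's."""
    return parse_version(requested) < parse_version(existing)

# Mirrors the transition exercised above: v1.31.0 cluster, v1.20.0 request.
print(downgrade_requested("v1.31.0", "v1.20.0"))
```

Upgrades (v1.20.0 to v1.31.0) pass this check; only the downgrade direction is rejected, which is why the subsequent restart at v1.31.0 succeeds in about 10 seconds.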

                                                
                                    
TestMissingContainerUpgrade (152.94s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3892541079 start -p missing-upgrade-265362 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3892541079 start -p missing-upgrade-265362 --memory=2200 --driver=docker  --container-runtime=crio: (1m17.957765541s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-265362
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-265362: (14.789159662s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-265362
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-265362 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0829 18:49:12.699819   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/functional-108290/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-265362 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.800010189s)
helpers_test.go:175: Cleaning up "missing-upgrade-265362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-265362
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-265362: (4.580565805s)
--- PASS: TestMissingContainerUpgrade (152.94s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.51s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251600 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-251600 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (68.309657ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-251600] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (26.83s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251600 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-251600 --driver=docker  --container-runtime=crio: (26.462856793s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-251600 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.83s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (104.29s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2196226435 start -p stopped-upgrade-260805 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2196226435 start -p stopped-upgrade-260805 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m17.725118711s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2196226435 -p stopped-upgrade-260805 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2196226435 -p stopped-upgrade-260805 stop: (2.189010438s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-260805 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-260805 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.372928208s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (11.33s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251600 --no-kubernetes --driver=docker  --container-runtime=crio
E0829 18:48:00.913627   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-251600 --no-kubernetes --driver=docker  --container-runtime=crio: (9.218028933s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-251600 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-251600 status -o json: exit status 2 (251.234603ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-251600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-251600
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-251600: (1.863153277s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.33s)
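Note: the `status -o json` payload above is the state the test expects after `--no-kubernetes`: the node container is Running while kubelet and apiserver are Stopped, which is why `status` exits 2 rather than 0. A sketch of checking that payload (the `kubernetes_disabled` helper is hypothetical; the JSON is copied from the output above):

```python
import json

# Status payload copied from the `minikube status -o json` output above.
raw = ('{"Name":"NoKubernetes-251600","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)

def kubernetes_disabled(s: dict) -> bool:
    """Host up but no Kubernetes components running (the --no-kubernetes state)."""
    return (s["Host"] == "Running"
            and s["Kubelet"] == "Stopped"
            and s["APIServer"] == "Stopped")

print(kubernetes_disabled(status))
```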

                                                
                                    
TestNoKubernetes/serial/Start (5.12s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251600 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-251600 --no-kubernetes --driver=docker  --container-runtime=crio: (5.116652574s)
--- PASS: TestNoKubernetes/serial/Start (5.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-251600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-251600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.705914ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.86s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.18s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-251600
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-251600: (1.177205411s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (11.46s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-251600 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-251600 --driver=docker  --container-runtime=crio: (11.462516646s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (11.46s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-251600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-251600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.765812ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-260805
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

                                                
                                    
TestPause/serial/Start (47.14s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-136011 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-136011 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (47.139280801s)
--- PASS: TestPause/serial/Start (47.14s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (26.12s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-136011 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-136011 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.107420555s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.12s)

                                                
                                    
TestNetworkPlugins/group/false (3.16s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-723984 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-723984 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (132.954632ms)

                                                
                                                
-- stdout --
	* [false-723984] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:50:30.675596  258479 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:50:30.675728  258479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:50:30.675739  258479 out.go:358] Setting ErrFile to fd 2...
	I0829 18:50:30.675743  258479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:50:30.675965  258479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-25336/.minikube/bin
	I0829 18:50:30.676560  258479 out.go:352] Setting JSON to false
	I0829 18:50:30.677742  258479 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9182,"bootTime":1724948249,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:50:30.677799  258479 start.go:139] virtualization: kvm guest
	I0829 18:50:30.679850  258479 out.go:177] * [false-723984] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:50:30.681130  258479 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:50:30.681149  258479 notify.go:220] Checking for updates...
	I0829 18:50:30.683522  258479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:50:30.684917  258479 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-25336/kubeconfig
	I0829 18:50:30.686334  258479 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-25336/.minikube
	I0829 18:50:30.687548  258479 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:50:30.688703  258479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:50:30.690337  258479 config.go:182] Loaded profile config "force-systemd-env-054859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:50:30.690428  258479 config.go:182] Loaded profile config "kubernetes-upgrade-400035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:50:30.690540  258479 config.go:182] Loaded profile config "pause-136011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:50:30.690608  258479 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:50:30.711909  258479 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:50:30.712015  258479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:50:30.756895  258479 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:76 SystemTime:2024-08-29 18:50:30.74776739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:50:30.756997  258479 docker.go:307] overlay module found
	I0829 18:50:30.759134  258479 out.go:177] * Using the docker driver based on user configuration
	I0829 18:50:30.760576  258479 start.go:297] selected driver: docker
	I0829 18:50:30.760588  258479 start.go:901] validating driver "docker" against <nil>
	I0829 18:50:30.760598  258479 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:50:30.762728  258479 out.go:201] 
	W0829 18:50:30.763920  258479 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0829 18:50:30.765103  258479 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-723984 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-723984

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-723984

>>> host: /etc/nsswitch.conf:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /etc/hosts:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /etc/resolv.conf:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-723984

>>> host: crictl pods:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: crictl containers:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> k8s: describe netcat deployment:
error: context "false-723984" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-723984" does not exist

>>> k8s: netcat logs:
error: context "false-723984" does not exist

>>> k8s: describe coredns deployment:
error: context "false-723984" does not exist

>>> k8s: describe coredns pods:
error: context "false-723984" does not exist

>>> k8s: coredns logs:
error: context "false-723984" does not exist

>>> k8s: describe api server pod(s):
error: context "false-723984" does not exist

>>> k8s: api server logs:
error: context "false-723984" does not exist

>>> host: /etc/cni:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: ip a s:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: ip r s:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: iptables-save:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: iptables table nat:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> k8s: describe kube-proxy daemon set:
error: context "false-723984" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-723984" does not exist

>>> k8s: kube-proxy logs:
error: context "false-723984" does not exist

>>> host: kubelet daemon status:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: kubelet daemon config:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> k8s: kubelet logs:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:49:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-400035
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:50:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-136011
contexts:
- context:
    cluster: kubernetes-upgrade-400035
    user: kubernetes-upgrade-400035
  name: kubernetes-upgrade-400035
- context:
    cluster: pause-136011
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:50:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-136011
  name: pause-136011
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-400035
  user:
    client-certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/kubernetes-upgrade-400035/client.crt
    client-key: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/kubernetes-upgrade-400035/client.key
- name: pause-136011
  user:
    client-certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/pause-136011/client.crt
    client-key: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/pause-136011/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-723984

>>> host: docker daemon status:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: docker daemon config:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /etc/docker/daemon.json:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: docker system info:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: cri-docker daemon status:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: cri-docker daemon config:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: cri-dockerd version:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: containerd daemon status:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: containerd daemon config:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /etc/containerd/config.toml:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: containerd config dump:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: crio daemon status:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: crio daemon config:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: /etc/crio:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

>>> host: crio config:
* Profile "false-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-723984"

----------------------- debugLogs end: false-723984 [took: 2.851575746s] --------------------------------
helpers_test.go:175: Cleaning up "false-723984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-723984
--- PASS: TestNetworkPlugins/group/false (3.16s)

TestPause/serial/Pause (0.77s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-136011 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-136011 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-136011 --output=json --layout=cluster: exit status 2 (306.931682ms)

-- stdout --
	{"Name":"pause-136011","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-136011","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-136011 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-136011 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestPause/serial/DeletePaused (3.87s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-136011 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-136011 --alsologtostderr -v=5: (3.868244545s)
--- PASS: TestPause/serial/DeletePaused (3.87s)

TestPause/serial/VerifyDeletedResources (14.15s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.094224421s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-136011
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-136011: exit status 1 (16.367749ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-136011: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (109.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-045539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-045539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (1m49.974251931s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (109.97s)

TestStartStop/group/no-preload/serial/FirstStart (56.44s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-522230 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-522230 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (56.440121884s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.44s)

TestStartStop/group/no-preload/serial/DeployApp (9.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-522230 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e6a3f20-2415-4015-b835-dfc157af2451] Pending
helpers_test.go:344: "busybox" [6e6a3f20-2415-4015-b835-dfc157af2451] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6e6a3f20-2415-4015-b835-dfc157af2451] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003940047s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-522230 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.23s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-522230 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-522230 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-522230 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-522230 --alsologtostderr -v=3: (11.804646801s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-522230 -n no-preload-522230
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-522230 -n no-preload-522230: exit status 7 (61.802177ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-522230 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)
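The "status error: exit status 7 (may be ok)" lines above show the test tolerating a nonzero exit from `minikube status` on a stopped profile rather than failing. A minimal sketch of that pattern; `probe` is a stub (echoing `Stopped` and returning 7) standing in for `out/minikube-linux-amd64 status --format={{.Host}} -p <profile>`:

```shell
#!/bin/sh
# Capture a status probe's output and exit code without aborting the run.
# `probe` is a stub for `minikube status --format={{.Host}} -p <profile>`
# against a stopped profile, which the log shows exiting with status 7.
probe() { echo "Stopped"; return 7; }

out=$(probe)
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "status error: exit status $rc (may be ok)"
fi
echo "host=$out"
```

The test then proceeds to enable the dashboard addon regardless, since a stopped host is the expected state at this point in the serial.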

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-522230 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-522230 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m21.828719517s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-522230 -n no-preload-522230
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-045539 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [59a8df82-e0e2-4d52-9482-ab0b35e6addf] Pending
helpers_test.go:344: "busybox" [59a8df82-e0e2-4d52-9482-ab0b35e6addf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [59a8df82-e0e2-4d52-9482-ab0b35e6addf] Running
E0829 18:53:00.913297   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003620435s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-045539 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-045539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-045539 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-045539 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-045539 --alsologtostderr -v=3: (11.827097767s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-045539 -n old-k8s-version-045539
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-045539 -n old-k8s-version-045539: exit status 7 (72.267308ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-045539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (143.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-045539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-045539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.976416006s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-045539 -n old-k8s-version-045539
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (143.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (41.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-101305 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-101305 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (41.984326072s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-358437 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-358437 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (41.621185571s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-101305 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ccd97beb-c868-4806-8af4-814e76d6e68f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ccd97beb-c868-4806-8af4-814e76d6e68f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004213193s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-101305 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-101305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-101305 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-101305 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-101305 --alsologtostderr -v=3: (11.860530241s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-101305 -n embed-certs-101305
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-101305 -n embed-certs-101305: exit status 7 (74.029331ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-101305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (261.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-101305 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-101305 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m21.495777763s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-101305 -n embed-certs-101305
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (261.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-358437 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [81ee81ad-f926-4c42-8bb7-9cff5ff6392d] Pending
helpers_test.go:344: "busybox" [81ee81ad-f926-4c42-8bb7-9cff5ff6392d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [81ee81ad-f926-4c42-8bb7-9cff5ff6392d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003433875s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-358437 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-358437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-358437 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-358437 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-358437 --alsologtostderr -v=3: (11.817414729s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437: exit status 7 (61.458963ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-358437 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-358437 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-358437 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m21.988641702s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bd5gx" [21ae54f2-3aef-40df-b257-a4d94ce34978] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004324783s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bd5gx" [21ae54f2-3aef-40df-b257-a4d94ce34978] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004395414s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-045539 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-045539 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
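The VerifyKubernetesImages step lists the node's images and reports any outside the expected Kubernetes set, as the "Found non-minikube image" lines above show. A sketch of that filter; the image list is canned stand-in data for `minikube image list` output, and treating only `registry.k8s.io/*` as expected is an illustrative assumption, not the test's exact allowlist:

```shell
#!/bin/sh
# Report images outside an expected-registry allowlist. The list below is
# canned stand-in data; the allowlist (registry.k8s.io/* only) is an
# assumption for illustration.
images='registry.k8s.io/kube-apiserver:v1.20.0
gcr.io/k8s-minikube/busybox:1.28.4-glibc
kindest/kindnetd:v20210326-1e038dc5'

found=$(printf '%s\n' "$images" | grep -v '^registry\.k8s\.io/')

printf '%s\n' "$found" | while IFS= read -r img; do
    echo "Found non-minikube image: $img"
done
```

With this input, the busybox and kindnetd images are flagged while the kube-apiserver image passes, mirroring the shape of the log output above.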

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-045539 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-045539 -n old-k8s-version-045539
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-045539 -n old-k8s-version-045539: exit status 2 (307.335832ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-045539 -n old-k8s-version-045539
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-045539 -n old-k8s-version-045539: exit status 2 (299.673687ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-045539 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-045539 -n old-k8s-version-045539
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-045539 -n old-k8s-version-045539
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.75s)
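In the paused state above, both status probes exit with status 2 while reporting `APIServer: Paused` and `Kubelet: Stopped`, and the test accepts that as the expected condition before unpausing. A stubbed sketch of that verification; `probe` stands in for `out/minikube-linux-amd64 status --format={{.<Field>}} -p <profile>`:

```shell
#!/bin/sh
# Verify a paused profile: each probe exits nonzero (status 2 in the log)
# while still printing the component state on stdout. `probe` is a stub
# for `minikube status --format={{.APIServer}}` / `--format={{.Kubelet}}`.
probe() { echo "$1"; return 2; }

api=$(probe Paused);      api_rc=$?
kubelet=$(probe Stopped); kubelet_rc=$?

if [ "$api" = "Paused" ] && [ "$kubelet" = "Stopped" ]; then
    echo "paused as expected (exit $api_rc/$kubelet_rc)"
fi
```

After `unpause`, the same two probes are expected to exit 0 again, which is why the final status runs in the block above report no error.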

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-580714 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0829 18:56:03.980440   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-580714 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (28.836931803s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-580714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-580714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.042360563s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-580714 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-580714 --alsologtostderr -v=3: (1.200538178s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-580714 -n newest-cni-580714
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-580714 -n newest-cni-580714: exit status 7 (60.673392ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-580714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-580714 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-580714 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (12.406471284s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-580714 -n newest-cni-580714
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-580714 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.53s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-580714 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-580714 -n newest-cni-580714
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-580714 -n newest-cni-580714: exit status 2 (279.593207ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-580714 -n newest-cni-580714
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-580714 -n newest-cni-580714: exit status 2 (282.674111ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-580714 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-580714 -n newest-cni-580714
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-580714 -n newest-cni-580714
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.53s)
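The Pause step deliberately tolerates a nonzero exit from `minikube status`: right after `pause`, the APIServer reports `Paused` and the kubelet reports `Stopped`, and the command exits 2 — which the test logs as "status error: exit status 2 (may be ok)". A hedged sketch of that acceptance rule (the helper name is ours, not the test suite's):

```python
def status_exit_may_be_ok(code: int) -> bool:
    # 0: all components running; 2: one or more components report a
    # non-running state (e.g. "Paused"/"Stopped"), which the test
    # accepts immediately after `minikube pause`. Anything else is a
    # genuine status failure.
    return code in (0, 2)

print(status_exit_may_be_ok(2))  # True: the "(may be ok)" case in the log
print(status_exit_may_be_ok(1))  # False: a real error
```

After `unpause`, the same status commands are rerun and are expected to exit 0.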

                                                
                                    
TestNetworkPlugins/group/auto/Start (40.6s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.599032462s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.60s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wsb8l" [d6bbc53f-e39e-4b38-956a-c100fc116ffb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003196319s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wsb8l" [d6bbc53f-e39e-4b38-956a-c100fc116ffb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003418408s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-522230 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-522230 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.6s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-522230 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-522230 -n no-preload-522230
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-522230 -n no-preload-522230: exit status 2 (285.199515ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-522230 -n no-preload-522230
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-522230 -n no-preload-522230: exit status 2 (279.786103ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-522230 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-522230 -n no-preload-522230
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-522230 -n no-preload-522230
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.60s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (48.72s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.715785414s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.72s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-723984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.19s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-723984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6vzxx" [9381dcbc-4303-45ca-b854-e80fe286ab2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6vzxx" [9381dcbc-4303-45ca-b854-e80fe286ab2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003808262s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-723984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
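The Localhost and HairPin checks both reduce to a TCP connect probe (`nc -w 5 -z … 8080`) from inside the netcat pod: first against `localhost`, then back through the pod's own service name to verify hairpin traffic. A local stand-in for that probe, using a throwaway listener in place of the netcat deployment (the helper name and listener are ours, for illustration only):

```python
import socket
import threading

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """TCP connect probe, roughly what `nc -w 5 -z host port` does."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway listener standing in for the netcat pod's port 8080.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=srv.accept, daemon=True).start()

print(port_open("127.0.0.1", port))  # True: the "Localhost"-style check
srv.close()
```

In the real test the hairpin variant targets the service name (`nc -z netcat 8080`), which only succeeds when the CNI in use supports hairpin (loopback-via-service) traffic.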

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (64.84s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0829 18:57:58.898405   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/old-k8s-version-045539/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:58:00.913197   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/addons-970414/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:58:01.460429   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/old-k8s-version-045539/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m4.84421875s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.84s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nkbkh" [ffc90a9e-7d0f-467a-9221-56bfffc8c550] Running
E0829 18:58:06.581797   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/old-k8s-version-045539/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004086272s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-723984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.18s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-723984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nfsl2" [c718b2f5-4dd1-4b84-9377-1dd19db0d35b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nfsl2" [c718b2f5-4dd1-4b84-9377-1dd19db0d35b] Running
E0829 18:58:16.823829   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/old-k8s-version-045539/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003618714s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-723984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (66.79s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m6.7915989s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.79s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-723984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-723984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r2skv" [f8d31e6b-6448-4840-95fa-4606a2dcfe78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r2skv" [f8d31e6b-6448-4840-95fa-4606a2dcfe78] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003724088s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-723984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-chj6t" [fb9e25d3-b935-48dd-ab47-47d1769165c3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004689315s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-chj6t" [fb9e25d3-b935-48dd-ab47-47d1769165c3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003723211s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-101305 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (49.35s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (49.34737195s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-101305 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.23s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-101305 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-101305 -n embed-certs-101305
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-101305 -n embed-certs-101305: exit status 2 (279.734658ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-101305 -n embed-certs-101305
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-101305 -n embed-certs-101305: exit status 2 (283.264707ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-101305 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-101305 --alsologtostderr -v=1: (1.012548536s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-101305 -n embed-certs-101305
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-101305 -n embed-certs-101305
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.7s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.697167362s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.70s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-723984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-723984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qwxmw" [c9087188-45fb-426d-92a8-d1ba81981386] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qwxmw" [c9087188-45fb-426d-92a8-d1ba81981386] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.002977763s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-723984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hxvc4" [484f0ea1-a37e-4dbd-aa1d-89a63253693a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003358534s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hxvc4" [484f0ea1-a37e-4dbd-aa1d-89a63253693a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00349529s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-358437 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-358437 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-358437 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437: exit status 2 (295.404077ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437: exit status 2 (294.976697ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-358437 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-358437 -n default-k8s-diff-port-358437
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.78s)
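The `--format={{.APIServer}}` and `--format={{.Kubelet}}` flags used by the Pause test above are Go `text/template` expressions evaluated against minikube's status struct, which is why the captured stdout is a single word (`Paused`, `Stopped`). A minimal sketch of that rendering; the `Status` type and `render` helper here are illustrative, not minikube's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status mirrors the two fields queried in the log above. The field
// names are illustrative: minikube's real status struct has more
// fields, but --format templates resolve against it the same way.
type Status struct {
	APIServer string
	Kubelet   string
}

// render applies a --format style Go text/template to a status value,
// producing the bare "Paused" / "Stopped" strings seen in the output.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	paused := Status{APIServer: "Paused", Kubelet: "Stopped"}
	out, err := render("{{.APIServer}}", paused)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints "Paused", as in the -- stdout -- block above
}
```

Note that the non-zero exit codes above are deliberate: when a component is paused or stopped, `minikube status` signals the degraded state via its exit status, which is why the test logs "status error: exit status 2 (may be ok)".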

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (43.95s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-723984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (43.952348909s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (43.95s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vwx5f" [c1086b6c-16b8-44cf-98e5-6a110a05a8c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003998588s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6twqx" [051ab39d-57f9-48c0-8716-c4aef5ed368b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004157838s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-723984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-723984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2phs4" [15bbd944-66e2-4323-ac27-8058d8a652ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2phs4" [15bbd944-66e2-4323-ac27-8058d8a652ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003918707s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-723984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-723984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zbgf6" [6e86fb46-ea89-491d-87c7-9b40563f70bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zbgf6" [6e86fb46-ea89-491d-87c7-9b40563f70bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003890465s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-723984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-723984 exec deployment/netcat -- nslookup kubernetes.default
E0829 19:00:40.189065   32150 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/old-k8s-version-045539/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-723984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-723984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pdknt" [a661623d-28ee-4809-acf1-f009ac00084a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pdknt" [a661623d-28ee-4809-acf1-f009ac00084a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003784373s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-723984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-723984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    

Test skip (25/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-848567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-848567
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (3.05s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-723984 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-723984

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-723984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-723984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-723984" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-723984" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-723984" does not exist

>>> k8s: coredns logs:
error: context "kubenet-723984" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-723984" does not exist

>>> k8s: api server logs:
error: context "kubenet-723984" does not exist

>>> host: /etc/cni:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: ip a s:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: ip r s:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: iptables-save:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: iptables table nat:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-723984" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-723984" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-723984" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: kubelet daemon config:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> k8s: kubelet logs:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-env-054859
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:49:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-400035
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:50:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-136011
contexts:
- context:
    cluster: force-systemd-env-054859
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: force-systemd-env-054859
  name: force-systemd-env-054859
- context:
    cluster: kubernetes-upgrade-400035
    user: kubernetes-upgrade-400035
  name: kubernetes-upgrade-400035
- context:
    cluster: pause-136011
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:50:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-136011
  name: pause-136011
current-context: force-systemd-env-054859
kind: Config
preferences: {}
users:
- name: force-systemd-env-054859
  user:
    client-certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/force-systemd-env-054859/client.crt
    client-key: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/force-systemd-env-054859/client.key
- name: kubernetes-upgrade-400035
  user:
    client-certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/kubernetes-upgrade-400035/client.crt
    client-key: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/kubernetes-upgrade-400035/client.key
- name: pause-136011
  user:
    client-certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/pause-136011/client.crt
    client-key: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/pause-136011/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-723984

>>> host: docker daemon status:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: docker daemon config:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: docker system info:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: cri-docker daemon status:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: cri-docker daemon config:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: cri-dockerd version:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: containerd daemon status:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: containerd daemon config:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: containerd config dump:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: crio daemon status:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: crio daemon config:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: /etc/crio:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

>>> host: crio config:
* Profile "kubenet-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-723984"

----------------------- debugLogs end: kubenet-723984 [took: 2.907759675s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-723984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-723984
--- SKIP: TestNetworkPlugins/group/kubenet (3.05s)

TestNetworkPlugins/group/cilium (5.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-723984 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-723984

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-723984

>>> host: /etc/nsswitch.conf:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: /etc/hosts:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: /etc/resolv.conf:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-723984

>>> host: crictl pods:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: crictl containers:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> k8s: describe netcat deployment:
error: context "cilium-723984" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-723984" does not exist

>>> k8s: netcat logs:
error: context "cilium-723984" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-723984" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-723984" does not exist

>>> k8s: coredns logs:
error: context "cilium-723984" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-723984" does not exist

>>> k8s: api server logs:
error: context "cilium-723984" does not exist

>>> host: /etc/cni:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: ip a s:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: ip r s:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: iptables-save:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: iptables table nat:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-723984

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-723984

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-723984" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-723984" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-723984

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-723984

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-723984" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-723984" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-723984" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-723984" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-723984" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: kubelet daemon config:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> k8s: kubelet logs:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:49:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-400035
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19531-25336/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:50:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-136011
contexts:
- context:
    cluster: kubernetes-upgrade-400035
    user: kubernetes-upgrade-400035
  name: kubernetes-upgrade-400035
- context:
    cluster: pause-136011
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:50:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-136011
  name: pause-136011
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-400035
  user:
    client-certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/kubernetes-upgrade-400035/client.crt
    client-key: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/kubernetes-upgrade-400035/client.key
- name: pause-136011
  user:
    client-certificate: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/pause-136011/client.crt
    client-key: /home/jenkins/minikube-integration/19531-25336/.minikube/profiles/pause-136011/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-723984

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-723984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-723984"

                                                
                                                
----------------------- debugLogs end: cilium-723984 [took: 5.168812257s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-723984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-723984
--- SKIP: TestNetworkPlugins/group/cilium (5.31s)
